Package Summary
Tags | No category tags. |
Version | 2.1.30 |
License | BSD |
Build type | CATKIN |
Use | RECOMMENDED |
Repository Summary
Checkout URI | https://github.com/jsk-ros-pkg/jsk_3rdparty.git |
VCS Type | git |
VCS Version | master |
Last Updated | 2025-05-10 |
Dev Status | DEVELOPED |
CI status | No Continuous Integration |
Released | RELEASED |
Tags | No category tags. |
Contributing | Help Wanted (0) Good First Issues (0) Pull Requests to Review (0) |
Package Description
Additional Links
Maintainers
- Kei Okada
- Yoshiki Obinata
Authors
- Ayaha Nagata
Emotion Analyzer Service using Hume API (ROS1)
This ROS1 package provides ROS services that analyze emotions in text or audio using the Hume AI API.
Requirements
- ROS1 Noetic
- Python 3.8+
- An API key from Hume AI
Installation
Clone this repository into your catkin workspace, move to this package directory, and run:
rosdep install -iry --from-paths .
catkin build --this
Then source your workspace.
Usage (Quick)
Using your microphone
roslaunch emotion_analyzer sample_emotion_analyzer.launch api_key:=<your_api_key>
Usage
1. Launch emotion_analyzer:
roslaunch emotion_analyzer emotion_analyzer.launch api_key:=<your_api_key>
2. Call the service (a rospy sketch for calling the services from a Python node follows at the end of this section)
For text:
rosservice call /analyze_text "text: '<text you want to analyze>'"
For a prepared audio file (up to 5 seconds):
rosservice call /analyze_audio "audio_file: <audio_file_path>"
As a sample, you can use '/home/leus/ros/catkin_ws/src/jsk_3rdparty/emotion_analyzer/data/purugacha_short.wav' as the audio file path (adjust the prefix to match your own workspace).
For live audio from a microphone:
roslaunch audio_capture capture.launch format:=wave
rosservice call /analyze_audio "audio_file: ''"
(An empty audio_file tells the service to use the captured microphone audio.)
You can check your audio device information with arecord -l.
Sometimes you need to replace "hw" with "plughw", for example:
roslaunch audio_capture capture.launch format:=wave device:=plughw:1,0
When the device is busy, run fuser -v /dev/snd/* to find the PID of the process holding it, then stop it with kill -9 <PID>.
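If you prefer calling these services from a node rather than the command line, here is a minimal rospy client sketch. The service type names AnalyzeText and AnalyzeAudio are assumptions (check this package's srv/ directory for the actual definitions); the request fields text and audio_file match the rosservice calls above.
#!/usr/bin/env python3
# Minimal sketch of a rospy client for /analyze_text and /analyze_audio.
# AnalyzeText and AnalyzeAudio are assumed service type names; replace them
# with the actual srv definitions shipped in this package.
import rospy
from emotion_analyzer.srv import AnalyzeText, AnalyzeAudio  # assumed names

rospy.init_node('emotion_analyzer_client')

# Text analysis: the 'text' request field matches the rosservice call above.
rospy.wait_for_service('/analyze_text')
analyze_text = rospy.ServiceProxy('/analyze_text', AnalyzeText)
print(analyze_text(text='I am so happy today!'))

# Audio analysis: pass a wav path, or an empty string to use the microphone
# stream captured by audio_capture, as in the example above.
rospy.wait_for_service('/analyze_audio')
analyze_audio = rospy.ServiceProxy('/analyze_audio', AnalyzeAudio)
print(analyze_audio(audio_file=''))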
Changelog for package emotion_analyzer
2.1.30 (2025-05-10)
- Add Emotion Analyzer (#527)
- Contributors: Ayaha Nagata, Yoshiki Obinata
Wiki Tutorials
Package Dependencies
System Dependencies
Dependent Packages
Launch files
- launch/sample_emotion_analyzer.launch
  - api_key [default: ]
  - format [default: wave]
- launch/emotion_analyzer.launch
  - input_audio [default: /audio/audio]
  - api_key [default: ]
Messages
Services
Plugins
Recent questions tagged emotion_analyzer at Robotics Stack Exchange