Overview
The SDK is a toolset that allows the development of games or applications which use the Emotiv neuroheadsets. The neuroheadset is worn by the player/user; it interprets brain signals and sends this information to the Emotiv EmoEngine. The EmoEngine translates the detection results into an EmoState, a data structure containing information about the current state of all active Emotiv detections.
The Emotiv API (Application Programming Interface) will be useful to our project since it enables application developers to write software applications that work with the neuroheadsets and detection suites.
The Suites:
Expressiv Suite:
The Expressiv Suite is responsible for handling the facial expressions of the player. Among the detected expressions are blinking, winking, brow movement, and mouth movement (smile, smirk, laugh).
The sensitivity adjustment panel to the right of the Expressiv Suite panel allows the user to check the performance of the detection and adjust the sensitivity. If the expression is too easily triggered or "false positive" expressions are being detected, the sensitivity can be lowered.
Affectiv Suite:
The Affectiv Suite reports on the emotions experienced by the user and displays them in a real-time graph.
It reports on engagement, boredom, frustration, meditation, and excitement (long- and short-term).
Cognitiv Suite:
Evaluates the player's brainwave activity to understand the user's intent to perform distinct physical actions on an object.
The 13 actions are split into two groups: 6 directional actions and 6 rotations, plus disappear.
Cognitiv allows 4 of these actions to be chosen/recognized at a time, not including neutral. An action power is associated with each action as well.
Programming with the Emotiv SDK
Using the API to communicate with the EmoEngine:
Prior to calling Emotiv API functions and during initialization, the application must establish a connection to the EmoEngine by calling EE_EngineConnect or EE_EngineRemoteConnect.
Use EE_EngineConnect to communicate directly with an Emotiv headset, and EE_EngineRemoteConnect to communicate with SDKLite or to connect your application to EmoComposer or the Emotiv Control Panel.
EmoEngine publishes events that can be retrieved by the application by calling EE_EngineGetNextEvent(). Most applications should poll for new EmoStates 10-15 times per second. (This can be done in the main event loop or wherever other input devices are periodically queried.)
To close the connection with the EmoEngine, call EE_EngineDisconnect().
Categories of EmoEngine events:
- Hardware-related events: events that report users connecting or disconnecting Emotiv input devices to the computer (example: EE_UserAdded)
- New EmoState events: events that report a change in the user's facial, cognitive or emotional state.
You can retrieve the new EmoState by calling EE_EmoEngineEventGetEmoState()
- Suite-specific events: events relating to training and configuring the Cognitiv and Expressiv detection suites (example: EE_CognitivEvent)
Most API functions return a value of type int, and most Emotiv API functions return EDK_OK if they succeed.
Connect to the EmoEngine:
Initialize the connection with the Emotiv EmoEngine by calling EE_EngineConnect().
Buffer Creation:
Buffers temporarily use memory to store information while data is being transferred.
The buffer is created using EE_EmoStateCreate(). Invoking EE_EngineGetNextEvent() retrieves the current EmoEngine event.
If the result of getting the event type (EE_EmoEngineEventGetType()) is EE_EmoStateUpdated, there is a new detection event for a particular user. EE_EmoEngineEventGetEmoState() copies the EmoState information from the event handle into the EmoState buffer.
For example, ES_ExpressivIsBlink(eState) could be used to access the blink detection.
EDK_NO_EVENT means that no new events have been published since the previous call.
EE_EmoStateFree() / EE_EmoEngineEventFree() can be used to free the memory allocated for the EmoState buffer and the EmoEngineEventHandle.
Cognitiv commands! (The fun part):
The user's conscious mental intention can be detected and used to control the movement of a 3D virtual object. The output of the Cognitiv detection indicates what the user is mentally engaged in at a given time. These commands are then sent to a separate application called EmoCube, which controls the movement of the 3D block.
Commands reach EmoCube via a UDP network connection. The action/command is communicated as two comma-separated, ASCII-formatted values: the action type and the action power, retrieved with ES_CognitivGetCurrentAction() and ES_CognitivGetCurrentActionPower(), respectively.
[Figure: example of calling the Cognitiv commands]
[Figure: how the Cognitiv training process works]
[Figure: extracting Cognitiv-specific event information from the EmoEngine event]
Cognitiv Training:
Before training an action, the action must be set using the API function EE_CognitivSetTrainingAction().
Examples of Cognitiv actions: COG_PUSH, COG_LIFT
If no action is chosen, neutral will be trained.
To begin the training, issue the COG_START training control. If training is successfully started, an EE_CognitivTrainingStarted event will be sent.
After 8 seconds of training, two events will be sent from the EmoEngine:
1. EE_CognitivTrainingSucceeded: if the EEG signal was good enough during training to update the algorithm for the player's behavior/signature, the trained signature will be updated.
2. EE_CognitivTrainingFailed: if the quality of the EEG signal wasn't good enough, the process restarts and the user is asked to start the training again.