Saturday, December 27, 2014



Hehehehe got an Arduino kit for Christmas. Made a little rainbow LED :) 
(ignore the TV in the background please) 


Saturday, December 20, 2014

Updates?

Joined Robotics club about a month ago at my school! So far I've played with Arduinos and ultrasonic sensors, writing a program that makes an LED light up when there is nothing within a foot in front of the sensor. I've also learned the power of relays and how to solder! The latter was very fun and I felt super cool. We just soldered some wires together but it was still fun.
I did the one on the left :D

MAN ROBOTICS IS FUN 

Also, a friend and I have come up with a plan to make a little robot that will travel around the sidewalks of our school and sweep up trash. We're very very excited about this :) 



Thursday, September 18, 2014

Thursday, September 11, 2014

Updates on the computer science-y stuff in my life:


  • Taking AP Computer Science! We're learning Java with Greenfoot, and I've started making the game Pong in it.
My version involves teddy bears as paddles...
I made it into a two player game and now I'm working on making it 1 player with the other paddle being an AI paddle. 

Video of 2-player game: 
  • Also taking Cryptography I with Stanford Online. I finished Stanford Online's Computer Science 101 course this summer. Cryptography is waaaayyy harder. It's cool though
  • Started learning how to make websites using XHTML and CSS from a 4-hour-long YouTube video. IT'S SO FUN. AND THE WEBSITE ACTUALLY DOES STUFF TOO. Like links and "jumping" to sections and email stuff!





Thursday, August 14, 2014

InternetProviders.com Scholarship Essay

“Just Google it.” The number of times I have heard this at school is innumerable. Since its popularization in the 1990s, the Internet has become an increasingly essential part of students’ education, nearly surpassing textbooks. This giant conglomeration of everything from scientific research papers to pictures of cuddly koala bears is practically my private tutor.
Last summer I visited one of the world's most beautiful cities of art: Paris. The Louvre, Centre Pompidou, Musée d'Orsay, the Rodin Museum... my eyes were ready to melt from the exposure to so many original pieces by history's most celebrated artists. I took it all in for those five treasured days in the City of Light, and when I returned home, I could not bear for the experience to slip from my fingers (and mind) so quickly. When I found out my school did not offer AP Art History, I turned to the Internet, enrolled myself in an online course, and flooded my brain with knowledge. Thanks to the Internet, my experience in Paris was enriched as the oeuvres I had only appreciated aesthetically gained a new level of meaning.
This summer, I undertook a research project with Florida State University. The goal of the project was essentially to make a mind-controlled computer game. As if that was not challenging enough, I also had to learn to program. The number of days I had to complete the project? Twelve. Within those twelve days I learned to program in C# and to build games with Unity 3D. My best friend during the process was Google. Every time I ran into errors in my program, I would turn to programming forums and online tutorials. My mentor explained to me that programming is no longer about memorizing functions or syntax: all of that can be easily accessed on the Internet. This good news was accompanied by a sigh of relief from me. "Thank you, Tim Berners-Lee," I thought.
We finished the project in time: a computer game controlled by the mind. I presented my work at a poster session at FSU, but also to the entire world via the Internet. Throughout the project, I had been posting updates, pictures and videos to my blog. When the project was finally completed, the first thing I did was post a video of me using it on YouTube. "Get your work out there, make yourself known," encouraged my mentor. The Internet has become a crucial part of how people find jobs and research opportunities. The cost is negligible, yet it reaps boundless benefits.
When I returned home, I could not wait to continue expanding my knowledge of computer science. I signed up for Computer Science 101 with Stanford Online. Face-to-face with me was a Stanford professor, yet we were located on opposite sides of the United States. I learned through uploaded videos all about software, image editing and computer security.
The Internet aids me on the daily level as well. Each evening, I download my notes for the next day from my teacher’s website, I dissect Wikipedia pages for guidance with my homework, I inquire about assignments on our class’s Facebook group page. At school, it continues. Updates about the status of swim practice are posted on our school’s website and I access my online calendar to check for appointments and meetings before I leave campus. During holidays, it continues. Holiday work is posted on our teachers’ websites and assignments are turned in via Edmodo. The Internet has become an integrated and integral factor in my school schedule, from beginning to end.
However, it is necessary to take a step back and realize that full dependence on anything can become an impediment, turning even a blessing into a weakness. There have been a handful of times when the Internet has failed me and productivity has come to a crashing halt. As the cliché goes, everything in moderation.


http://www.internetproviders.com/internet-scholarship.html


Monday, July 21, 2014

We made it! 
Poster practicing
Poster session at the Student Services Building on July 17th!

Wednesday, July 9, 2014

Lab 2: Sorting

For this lab, I created three C++ programs which sorted a list of random numbers from minimum to maximum and then gave the user a choice of what kind of finished product they wanted printed.

The program involved mastering three different sorting algorithms - bubble sort, selection sort, and insertion sort. I also learned how to set up a function that uses an "if" statement on the user's input to decide what to output.

Creating the programs also involved importing the numbers into an array from another file.
For reading the numbers in, I was able to use what I learned from Lab 1. However, it required a new function to put those numbers into an array.


Thursday, July 3, 2014

Screencast of new game (controlled using my mind) -7/3/14

http://youtu.be/Q8CaUX4OGdw

Successsssssss (or at least, some) - July 3, 2014

Using the EmoKey I am now successfully able to control the game using my mind. 

The EmoKey is a tool to map the EmoStates into keyboard inputs. 
It can use combinations of keys or any of the keys on the keyboard.

To set up the EmoKey, you create an EmoKey mapping, which has a few parts:
  • The "rules" name the command and choose which keyboard input corresponds to it. For example, to replace the space bar in a game with input from the headset, you would add the spacebar as the rule.
  • The "behavior" of the key lets you choose whether activating the key holds it down or not.
  • The "connection" for a rule is what you want the command to do. For example, if you want the spacebar to make the player jump, you would choose "lift" in the drop-down menu. The mapping connects to the control panel rather than the neuroheadset or EmoEngine, so it can be set up for different users and the settings can be configured before use.
  • The "value" of the connection controls how easy it is to activate the key: a lower value means it takes less power to activate the command.

A couple things I ran into when using the neuroheadset to control the game:
  • Sometimes the EPOC randomly crashes and I don't realize this while in the game, so I keep trying to move the block without realizing the signal has been lost. 
  • Setting up the input to replace the keyboard keys in the control panel instead of in the EmoKey does not give an option to save, while the EmoKey does. (Mapping will be lost when the control panel is closed)
  • Setting up the space bar to correspond to "lift" unintentionally triggered the fullscreen option in the Control Panel. 

Next, I redesigned level 2 of the game. Previously, when I reduced the game to only one command, forward, it was too easy to win; it was only a matter of time before you got to the end of the game. 

In order to keep the game playable, I limited it to two commands: forward and up (or fly/lift). I changed the game so that there are now gaps in the floor plane. Since the player can only move forward, I also made all the goal blocks that earn the player points, and the final goal block, sit in a straight line. However, to make the game more interesting, I incorporated the up command by making it necessary to jump over the breaks in the game plane, or else the player loses. In addition, I moved the goal blocks up so that the player has to jump to reach them as well. 

The game now tests the player's ability to issue two different commands at the same time, since it is necessary to move not just forward but up and forward together. In addition, the player has to control how far they move forward to avoid falling into the next gap in the game plane. 
View of game from side
Game view

Finally, in order to make it easier to see where exactly in the game the player is (it was originally difficult to estimate depth because of the perspective), I attached the camera to the player so that the camera follows the player as it moves. 




Tuesday, July 1, 2014

Experimenting with Blender-July 1st, 2014

We originally thought we would have to redo the Unity game prototype using Blender and rewriting the scripts in Python, since the rest of the project, like the Cortical Learning Algorithm and the analysis on the brainwaves that Julia was doing, used Python and Linux. 

I made a snowman. :)
Extruding (shown above) is super fun. Pretty sure Unity doesn't have this. 

(Thanks YouTube user BornCG for the tutorials on how to use Blender)


However, we realized we were able to use the EmoKey to create new inputs for the commands that the keyboard arrows served previously. To make the game less difficult, I made it so that only moving forward would be necessary to get through the game. As a result, the final hurdle was lowered so that jumping was not necessary, and the blocks that earn the user points were placed in a straight line, as opposed to all over the game plane, which would have required the left and right arrow keys. 

I am now able to move the cube in the game using my mind!


Using the Emotiv EPOC

Summarized thoughts on the Emotiv EPOC: 
"Wow wow super cool"

The EPOC has 2 reference sensors and 14 EEG sensors that have to be hydrated before use. Once the sensors were hydrated and placed into the headset, we tried the headset out on AJ. The sensors had to be hydrated again since there was not enough saline solution the first time. The wireless signal, according to the control panel, was strong, although the user has to sit fairly close to the USB dongle. After a bit of wiggling the sensors around, we were able to get a good connection where nearly all of the sensors showed a "green" (the best) connection. 

We started with training the EPOC to recognize facial expressions. 
The control panel had a little robot which would mimic your facial expressions as it recognized them. Among the facial expressions were laugh, smile, smirk (right and left), brow furrow, eyebrow raise, and blink. 

We next worked on Cognitiv training. We began by training Push. For training, the user has to concentrate on thinking of the command consistently for 8 seconds. After training is over, the user is prompted to either accept the training or discard it. It probably would have been better to first train neutral until the skill level for neutral was very high. Instead, we added too many commands too quickly, bringing in pull, left, and right before the first commands were mastered, which made controlling the block much more difficult. The more commands that have been added, the harder it is for the control panel to recognize each one. 

We also took a look at the Affectiv suite, which showed us our emotions during the training. It was interesting to see the places where we were frustrated, and the ones where we had a burst of excitement. 

After a couple days of playing around with the EPOC, we discovered a couple of things:
  • The control panel crashes after about 800 seconds of run time when used on Linux, but not on Windows
  • Training without the animation helps the user realize whether or not the training was successful, so they can make a smart decision about whether to keep it. When the animation is turned on, it is impossible to tell whether the block's movement was due to the user or the animation. If the block does not successfully complete the command being focused on during training, the user could/should choose to discard the training. 
  • Visualizing the command that is being trained helps the control panel understand the user's intentions. 
  • Thinking of a specific word like "nothing" and staring at a solid color such as the white background of a window helped me master "neutral."





Thursday, June 26, 2014

Screencast of the game - June 26, 2014

Link to Screencast of the game on June 26th 2014

Everything You've Ever Wanted to Know About the Emotiv Software Developer Kit (SDK)

Overview
The SDK is a toolset that allows the development of games or applications which use the Emotiv neuroheadsets. The neuroheadset is worn by the player/user; it picks up brain signals and sends this information to the Emotiv EmoEngine. The EmoEngine translates the detection results into an EmoState, a data structure containing information about the current state of all active Emotiv detections. 

The Emotiv API (Application Programming Interface) will be useful to our project since it enables application developers to write software applications that work with the neuroheadsets and detection suites. 

The Suites:
Expressiv Suite: 
The Expressiv Suite is responsible for handling the facial expressions of the player. Among the detected expressions are blinking, winking, brow movement, and mouth movement (smile, smirk, laugh).
The sensitivity adjustment panel to the right of the Expressiv Suite panel allows the user to check the performance of the detection and adjust the sensitivity. If an expression is triggered too easily or "false positive" expressions are being detected, the sensitivity can be lowered.
Affectiv Suite:
The Affectiv Suite reports on the emotions experienced by the user and displays them in a real time graph.
The Affectiv Suite reports on engagement, boredom, frustration, meditation, and excitement (long and short term).
Cognitiv Suite:
Evaluates the player's brainwave activity to understand the user's intent to perform distinct physical actions on an object.
The 13 actions are split into two groups: 6 directional actions and 6 rotations, plus disappear.
Cognitiv allows 4 of these actions to be recognized at a time, not including neutral. An action power is associated with each action as well.

Programming with the Emotiv SDK
Using the API to communicate with the EmoEngine:
Prior to calling Emotiv API functions and during initialization, the application must establish a connection to the EmoEngine by calling EE_EngineConnect or EE_EngineRemoteConnect.
Use EE_EngineConnect to communicate directly with an Emotiv headset, and EE_EngineRemoteConnect to communicate with SDKLite or to connect your application to EmoComposer or the Emotiv Control Panel.

EmoEngine publishes events that can be retrieved by the application by calling EE_EngineGetNextEvent ( ). Most applications should poll for new EmoStates 10-15 times per second. (This can be done in the main event loop or when other input devices are periodically queried.)

To close the connection with the EmoEngine, call EE_EngineDisconnect ().

Categories of EmoEngine events:
-Hardware-related events: events that report when users connect or disconnect Emotiv input devices to the computer (example: EE_UserAdded)
-New EmoState events: events signaling a change in the user's facial, cognitive or emotional state
You can retrieve these by calling EE_EmoEngineEventGetEmoState ( )
-Suite-specific events: events relating to training and configuring the Cognitiv and Expressiv detection suites (example: EE_CognitivEvent)

Most API functions return a value of type int. Most Emotiv API functions return EDK_OK if they succeed.


Connect to the EmoEngine:
Initialize the connection with the Emotiv EmoEngine by calling EE_EngineConnect ( ).

Buffer Creation:
Buffers temporarily use memory to store information while data is being transferred.
A buffer is created using EE_EmoStateCreate ( ). Invoking EE_EngineGetNextEvent ( ) retrieves the current EmoEngine event.
If the result of getting the event type (EE_EmoEngineEventGetType ( )) is EE_EmoStateUpdated, there is a new detection event for a particular user. EE_EmoEngineEventGetEmoState ( ) copies the EmoState information from the event handle into the EmoState buffer.
For example, ES_ExpressivIsBlink (eState) could be used to access the blink detection.

EDK_NO_EVENT means that no new events have been published since the previous call.

EE_EmoStateFree ( ) / EE_EmoEngineEventFree ( ) can be used to free up memory allocated for EmoState buffer and EmoEngineEventHandle.

(I wonder if anyone's reading this still. If you are, congrats)

Cognitiv commands! (The fun part):
The user's conscious mental intention can be detected and used to control the movement of a 3D virtual object. The output of the Cognitiv detection indicates what the user is mentally engaged in at a certain time. These commands are then sent to a separate application called EmoCube, which controls the movement of the 3D block.

Commands reach the EmoCube via a UDP network connection. The action/command is communicated as two comma-separated, ASCII-formatted values. The first value is the action type and the second is the action power, obtained from ES_CognitivGetCurrentAction ( ) and ES_CognitivGetCurrentActionPower ( ), respectively.

Above is an example of calling the Cognitiv commands.

The above picture shows how the Cognitiv training process works. 

Below is an example of extracting Cognitiv-specific event information from an EmoEngine event:

Cognitiv Training:
Before training an action, the action must be set using the API function EE_CognitivSetTrainingAction ( ). 
Examples of Cognitiv actions: COG_PUSH, COG_LIFT
If no action is chosen, neutral will be trained. 
To begin the training, set the training control to COG_START. If training starts successfully, an EE_CognitivTrainingStarted event will be sent. 
After 8 seconds of training, one of two events will be sent by the EmoEngine:
1. EE_CognitivTrainingSucceeded: if the EEG signal was good enough during training to update the algorithm with the player's signature, the training will be updated.
2. EE_CognitivTrainingFailed: if the quality of the EEG signal wasn't good enough, the process restarts and the user is asked to start the training again.




Wednesday, June 25, 2014

Scientific Computing Essentials - Lab 1

For our first Scientific computing essentials (SCE) assignment, we had to create a C++ program that would calculate the average, sum, minimum and maximum of a set of 920 random numbers we were given.

Creating the program involved utilizing the class fstream in order to read in the numbers from the RandomNumber.tsv file. This was probably the hardest part of the project for me, since I'd never done anything like it before.

We next worked on finding the sum of the numbers. At first, the function I created for adding the numbers was only adding the first number, so I had to create a while loop which would continue to add the numbers from the file as long as numbers were still being read in. 

Once we found the sum, we were able to find the average by simply dividing by 920, the number of random numbers in the file. This is a weakness of our program, since it can only be applied to sets of 920 random numbers. 

We next found the minimum and maximum of our number set by using a function which first set an initial value for the maximum and then compared the subsequent numbers to the previously set value for the extreme, either minimum or maximum.  For example, to find the maximum we set our initial value as 0 since we knew we had many numbers that were greater than 0. This could be another weakness in our program since if the random number set's maximum is less than 0, the program would simply think the maximum was 0. We used the same idea to find the minimum, except our initial value was 1. Knowing we had many negative numbers, our minimum would surely be less than 1. However, this causes the same weakness we could potentially encounter with finding the maximum. 

Finally, we used cout in order to print the sum, average, maximum and minimum.

Tuesday, June 24, 2014

Day 5 - June 24, 2014

Goal for the day: fix all the problems identified last week! 

The first thing I did today was to set the points back to 0 when the player dies. I also programmed the game to reset the points after the player completes the level. Therefore, the maximum number of points a player can score per game is 30.

Using a bool, I tackled the next of my problems and made the time stop when the player completed the game and had this time display on the pop-up screen at the end of the game.

Finally, I created a variable called finalScore (data type = float) which would be responsible for creating a finalScore out of the time elapsed in the 2nd level and the number of points scored. This involved transforming strings into floats and basic operators such as + and -.




Thursday, June 19, 2014

Screen cast of game prototype from 6-19-2014

Link to a screen cast of the game prototype as of June 19th 


Day 4 -Setting up a scoring system

Created a goal at the end of the game plane which the player has to pass through to finish the level. At this time as well, the pop up window will appear that I set up previously.


Set up a scoring system where each block is worth 10 points. When the player block passes through them, the score total at the top left hand corner will update itself and the blocks that have been passed through are destroyed. 

I added the score gained by passing through the goals to the pop up window. 

Day 3 - Game Behavior


Day 3- June 17th 2014
Created a main menu with a GUI button that the player can click when he/she is ready to advance to the first level.


Changed the game behavior so that when the player block falls off of the game plane, instead of respawning back to its original position, the player loses and is taken back to a "die menu" which states that the player lost and has a GUI button for "Try Again" which will take the player to the first level. 



I also created a pop-up like window which will show up when the player block passes through a block which is the end of level 2. The pop up window shows how long the level took and how many points the player scored. 


In addition, I created a timing system at the top left corner which resets for each level and plays a part in calculating the player's score. (The faster they complete the level, the better.) 

*This picture was taken on Day 4 after other aspects of the game were changed as well*

Friday, June 13, 2014

Day 2 - Creating a Game Prototype within Unity

Made it so that colliding with a box tagged goal would take the player to the next level. 

Also made it so that falling off the plane would take the player back to the starting position

Wrote a script so that when the player collides with certain blocks, they win a certain number of points. 

(Points show up on the bottom of the screen and are hard to notice though)



Got the player to be able to fly/jump (above picture)

Created a GUIText thing so that points would be more visible
Tried to write a script so that the point total would update itself, but:
Ended the day with all these fun errors ^ 

Tuesday, June 10, 2014

Day 1 - Moving a Block



Game Trial - June 10, 2014

Level 1 of the Earthquake game

-Unclear what to do once the blue and purple consoles are found
-Once the player falls into the water, it would be better for the game to end (take the player to a "game over" type of screen) so they realize they can't swim in the water
-Better way of displaying the "find our friends and don't get wet" message
-Putting the message on the people made it unclear that those people counted as the people you had to find
-Add a way of marking which people/consoles have been found, since they all look similar
-Green symbols are confusing
-Hard to understand what the symbols on the left side of the bottom right-hand chart mean
-Where's the math?