
VR SCENT INTEGRATION


OBJECTIVE

 

The objective of this project was simple: to integrate scent, and thus engage the user's olfactory system, in a virtual environment.

This project was designed to tackle a major problem in VR: virtual reality sickness. VR use can induce symptoms of nausea and motion sickness within around 15 minutes. While searching for solutions, I found a research paper stating that integrating another of the five senses (besides sight and hearing) could greatly reduce symptoms of VR sickness. So I set out to develop a device that could integrate scent into a VR environment.

EARLY PROTOTYPING

 

Some of my first prototypes were based on a very simple mechanism. I inserted a few vials of scent into a plastic case that strapped onto the bottom of a VR HMD, so that the vials sat directly in front of the user's nose. Then, using an Arduino, I could control the emission of the scents either (1) randomly or (2) manually.

​

The pictures below show some of the very first models I created.

[Photos of early prototypes]

DIVING DEEPER

After watching the documentary Meeting You, about a mother who was able to reunite with her deceased daughter via VR, I decided it was time to crank the technological level of my project up a notch. The first task ahead was the control of scent emission: I needed the computer to actually recognize what the user was viewing and emit scents accordingly, and the only realistic option I had was image recognition. At this point in development, I was relatively new to Python. I did have some background in Java and C, but some research quickly told me that the best way to implement image recognition in my device would be Python. And thus the journey began.

STEP 1: IMAGE RECOGNITION

I was neither familiar nor comfortable with Python. As I said above, I had just started coding in it, and even the shallow knowledge I had stemmed from my background in Java. So I knew this would be a challenge.

​

It's hard to explain exactly how I did it, because when I code I tend to 'zone out' and lose track of what's happening. But eventually, after working through many tutorials and resources, I came up with a solution. It went as follows:

1. Using Keras and TensorFlow, set up a CNN (a minimal sketch follows after this list).

2. Manually scavenged approximately 40,000 images of the following:

Apples, Oranges, Lemons, Peaches, Lavender, Mint, Coconut, Bananas, Ocean, and White Walls (control), 4,000 of each.
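
To give a concrete picture of step 1, here is a minimal Keras/TensorFlow sketch of a CNN for 28 x 28 RGB inputs and ten classes. The layer sizes here are illustrative, not the exact architecture in NEON.PY.

```python
# A small CNN for 28 x 28 RGB inputs and ten scent classes.
# Layer sizes are illustrative, not the exact model in NEON.PY.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),  # one output per object class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```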

​

I then converted each image with the following steps (sketched in code after the list):

1. Compress each image to 28 x 28 pixels

2. Extract the RGB values of each pixel

3. Create a NumPy array containing (1) the position of each pixel and (2) its RGB values
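
In code, that conversion looks roughly like this (the full version lives in DBDGKQ.PY; details such as the normalization are illustrative):

```python
# Convert one image into a 28 x 28 RGB NumPy array.
# Pixel position is encoded by the array indices; the last axis holds R, G, B.
import numpy as np
from PIL import Image

def image_to_array(path):
    img = Image.open(path).convert("RGB").resize((28, 28))
    return np.asarray(img, dtype=np.float32) / 255.0  # scale values to [0, 1]
```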

​

Finally, I fed the array of each photo into the CNN. Then I set up a simple script where the user can provide an input photo for the CNN to run through and recognize the items in it. After training my computer on around 40,000 images via NumPy arrays, it was able to distinguish the 10 different objects to some extent (around 80% accuracy). I then saved the model as a .h5 file and went on to the next step.
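
Continuing the sketches above (the array names and epoch count are hypothetical), training, saving, and reusing the model looks roughly like this:

```python
# Train on the preprocessed arrays, save the model, and classify a new photo.
# x_train: shape (40000, 28, 28, 3) built with image_to_array; y_train: labels 0-9.
import numpy as np
from tensorflow.keras.models import load_model

model.fit(x_train, y_train, epochs=10, validation_split=0.1)
model.save("my_model.h5")  # the weights shared as MY_MODEL.H5

model = load_model("my_model.h5")                  # reload later
x = image_to_array("input.jpg")[np.newaxis, ...]   # add a batch dimension
predicted_class = int(model.predict(x).argmax())   # index of the detected object
```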

 

If anyone would like to reference the code, I have attached a link to the Google Drive folder.


​

There are three files in the Google Drive folder:

​

1. DBDGKQ.PY: This script converts an image of any resolution into 28 x 28 pixels and stores the pixel RGB values in a NumPy array. You will have to manually insert the path to the folder where your images are stored.

​

2. MY_MODEL.H5: This is the pre-trained image recognition model, which can be loaded into NEON.PY by anyone who wants to distinguish the same objects as I did.

 

3. NEON.PY: This script sets up the CNN and teaches your computer to recognize images.

​


STEP 2: PYTHON-ARDUINO

The next problem I encountered was that my actual device was controlled by an Arduino Uno board, not Python. Thus I had to communicate with my Arduino board from Python. I initially thought this would be hard, but it turned out to be easier than I expected.

​

By importing the pySerial library into my Python project, I could communicate with the Arduino board via its serial port. Basically, whenever the image recognition classifier detected an object, it would relay a specific letter assigned to that object. For example, if it detected apples, it would send 'A' over the serial port, which the Arduino board could then act on.
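
A minimal sketch of the Python side, assuming a hypothetical port name and letter mapping:

```python
# Send a one-letter code to the Arduino whenever the classifier detects an object.
# The port name, baud rate, and mapping below are illustrative.
import serial

arduino = serial.Serial("COM3", 9600)  # must match Serial.begin() in the sketch

SCENT_CODES = {"apple": b"A", "orange": b"O", "lemon": b"L"}  # one letter per object

def relay_detection(label):
    code = SCENT_CODES.get(label)
    if code is not None:
        arduino.write(code)  # the Uno reads this byte with Serial.read()
```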

STEP 3: ARDUINO

Now all that was left was to refine the device's scent-emission mechanism. I switched the scents from vials to solidified ones, and instead of opening and closing a vial, I moved to a fan mechanism I invented.

 

Instead of vials, I placed the solidified scents in small plastic containers, 0.02 x 0.02 x 0.1 m boxes (2 x 2 x 10 cm) with either end open. Then, on one end I mounted a fan, which would constantly blow the scent away from the user's nose. When the Arduino received the signal from Python that an object had been detected, an apple for example, the fan in front of the apple scent would stop running, allowing the scent to flow to the user's nose.
 


STEP 4: RESEARCH

With that accomplished, I was now ready to move on to actually using the device for research purposes.
