CECS 490B – Midterm 2

CECS 490B – Midterm Report 2

Due: 3/23/16

SERVOARM

Skyler Tran

Victor Espinoza

Michael Parra

Jose Trejo (Not Participating)



Introduction:

Hardware:

  1. Overview of hardware design
  2. Theory
  3. Block Diagram

Block Diagram Descriptions:

  4. Schematics

Custom FPGA for camera – (S1):

Dual Power supply (5V and 12V) – (S2):

Arduino controlling for servos and LED and speaker – (S3):

Software:

  1. Flow Charts of Software

Top-Level Software Flowchart:

Overall Flowchart Description:

Flowchart:

Block Descriptions:

Look For Face Flowchart:

Overall Flowchart Description:

Flowchart:

Block Descriptions:

Move Servos Flowchart:

Overall Flowchart Description:

Flowchart:

Block Descriptions:

Follow Face Flowchart:

Overall Flowchart Description:

Flowchart:

Block Descriptions:

LED/Speaker Flowchart:

Overall Flowchart Description:

Flowchart:

Block Descriptions:

  2. Source code listings:

Construction:

  1. Completed Task List
  2. Layouts of custom circuit board
  3. Prototyping Boards Used
  4. Constructed boxes, structures, etc…
  5. Tasks to be Completed:

Introduction:

To the naked eye the Desk-Buddy may appear to be just a normal lamp, but it is something far more unique. The Desk-Buddy is actually not a lamp at all; it is an interactive robotic arm equipped with a camera that can detect a person’s face via facial detection. The Desk-Buddy has four main segments, all controlled by servo motors. The servos are strategically placed on the different segments of the Desk-Buddy to allow it to move up and down and left to right. There are also servos attached to the segment that houses the camera, allowing the camera to move freely (up, down, left, and right). This lets the Desk-Buddy move around and look for people to interact with. The Desk-Buddy also includes color-changing lights to help it exhibit different emotions (happy, sad, mad). For example, if the Desk-Buddy is not able to find anybody to interact with, it changes its lights to blue to signify sadness. To further help it convey emotions, the Desk-Buddy also has an attached speaker that allows it to make certain noises based on the emotion that it is trying to convey.

The camera will take pictures and process the images at a frame rate of 3 frames per second. It will save the images to memory and process them using a facial detection algorithm (we decided to use the Kovac Model and YCrCb Skin color equations for this project). Once the image is processed, the results will then determine whether we should update the servos, LEDs, and speaker noise.

The Desk-Buddy is strictly made for the entertainment of the user so that they can have a fun device to interact with. The final product with everything connected to it will weigh less than 12 pounds.

Hardware:

1. Overview of hardware design

The hardware that we are designing for this lab consists of a dual power supply, a custom PCB that uses an FPGA to drive our camera interface, an LED driver for our RGB LED strip, and an amplifier circuit for our speaker.

2. Theory

LED Darlington transistor calculations:

We must run 12 V to power the RGB LEDs. Each segment of the SMD5050 RGB strip consists of 3 LEDs. Each color channel in a segment draws approximately 20 mA from the 12 V supply, meaning a maximum of 20 mA through the red LEDs, 20 mA through the green LEDs, and 20 mA through the blue LEDs per segment. If we run the strip at full white (all LEDs lit), that is 60 mA per segment.

Our SMD5050 RGB LED strip contains 30 LEDs per meter (10 segments per meter). To find the total maximum current draw per meter, we multiply 60 mA × 10 segments = 0.6 A per meter. This assumes, however, that all of the LEDs are on at once (displaying white) from the 12 V supply. Since we are using less than a meter of the strip, we estimate that we will need to drive about 0.4 A to the LEDs when they are all on.
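As a sanity check on the arithmetic above, this small standalone C++ snippet (our own helper, not project firmware) computes the worst-case strip current from the 20 mA-per-channel figure:

```cpp
#include <cassert>

// Rough current estimate for an SMD5050 strip: each segment draws
// about 20 mA per color channel, so full white is 60 mA per segment.
// The 20 mA figure is the report's estimate, not a measured value.
double stripCurrentAmps(int segments, int channelsOn) {
    const double mAPerChannel = 20.0;  // per segment, per color channel
    return segments * channelsOn * mAPerChannel / 1000.0;
}
```

For one meter (10 segments) at full white (3 channels) this gives the 0.6 A per meter quoted above.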

 

Speaker Amplification: We must create an amplifier circuit for our 8 Ω speaker. The amplifier will drive the speaker, increasing the volume and cleaning up the signal. We will be using the LM386 to accomplish this. We are still comparing different LM386-based circuits, as that chip seems to be the best choice thus far.

 

3. Block Diagram

Block Diagram Descriptions:

 

Dual Port Power Supply (Port 1: 5V 4A, Port 2: 12V 4A): The power supply provides the power for all of the components in the circuit. For our design we will use the Spartan-6 FPGA, which will draw approximately 200 mA. The 5 servos draw a total of 2.5 A (500 mA each). We have 10-20 5 mm RGB LEDs that will draw 200 mA (an LED draws 20 mA at 2 V). The camera will draw about 20 mA. The total current needed for our design is less than 4 A, but we need to make sure that we provide sufficient current to each component, meaning we want headroom beyond the bare minimum. As such, we decided to have our power supply deliver 4 A.

 

Component           Power
Spartan-6 FPGA      200 mA
Arduino Uno         200 mA
Speaker             200 mA
Servos (5)          2.5 A
LEDs (10-20, 12 V)  200 mA
Total               5 V: 3.12 A; 12 V: 200 mA

 

Power Switch: The power switch turns the whole circuit on and off. We may need to add a capacitor across the switch to suppress sparking and smooth out the current. The switch can spark because, as the contacts close, current can arc across the narrowing gap from one terminal to the other before full contact is made.

 

3.3V Converter: The 3.3 V converter converts the 5 V supply down to 3.3 V for the camera, which requires 2.5-3.3 V. The converter uses a step-down voltage regulator with capacitors on the input and output to smooth the DC voltage. The 3.3 V rail also feeds the FPGA, setting the GPIO logic levels so that a logic 0 is 0 V and a logic 1 is 3.3 V.

 

1.2V Converter: The 1.2 V converter steps 3.3 V down to 1.2 V to power the FPGA core; the Spartan-6 uses 1.2 V as its core supply.

 

Custom FPGA: We are going to use the Spartan-6 FPGA in our design. We chose the Spartan-6 because we believe it will be capable of efficiently processing the images it receives from the camera to determine whether a person’s face is in the frame.

 

Speaker: The speaker will output robotic sounds in order to communicate with the user. It can output high and low pitches, which combine into different groups of sounds. For example, it will output high-pitched beeps at a fast rate to express a happy mood; in the sad mood, the speaker will output low-pitched beeps at a slow rate to mimic a sad noise.
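One way to encode this pitch/rate mapping is a small lookup, sketched below in C++; the specific frequencies and beep durations are placeholders we chose for illustration, not final values.

```cpp
#include <cassert>

// Sketch: each mood maps to a tone frequency and a beep duration.
// Shorter beepMs gives the "faster" feel described above.
struct TonePattern {
    int pitchHz;  // tone frequency sent to the speaker
    int beepMs;   // duration of each beep
};

enum Mood { IDLE = 0, HAPPY = 1, MAD = 2, SAD = 3 };

TonePattern toneFor(Mood m) {
    switch (m) {
        case HAPPY: return {2000, 100};  // high pitch, fast beeps
        case MAD:   return {1200, 150};
        case SAD:   return {300, 600};   // low pitch, slow beeps
        default:    return {500, 400};   // idle chirp
    }
}
```

On the Arduino side, each pattern could be played with `tone(pin, pitchHz)` followed by a `beepMs` delay.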

 

Speaker Driver: This is needed to amplify our speaker output.

 

Servos: There are 5 servos in our design. Each servo controls a different position on the arm. We have a servo controlling the base of the arm, which turns from left to right (Servo_1). Another servo controls the base appendage of the Desk-Buddy, allowing it to go up and down (Servo_2). The third servo helps the Desk-Buddy lean forwards and backwards (Servo_3). The fourth servo moves the camera up and down (Servo_4). The fifth servo moves the camera left and right (Servo_5).

 

RGB LED Strip: We plan to use RGB LEDs for this project so that we can alter the color being displayed on the LEDs in order to simulate different moods. The way that RGB LEDs work is that all of the colors available come from a combination of different values for the Red, Green, and Blue colors. Each of these three color values is altered using Pulse Width Modulation techniques to output different colors.
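The mood-to-color mapping can be sketched as a table of PWM duty cycles (0-255 per channel), as below in C++; the exact duty values are our placeholder choices, following the colors stated elsewhere in this report (green = happy, red = mad, blue = sad, gray = idle).

```cpp
#include <cassert>

// Each state maps to an (R, G, B) duty-cycle triple that the controller
// would write out as three PWM signals to the LED driver.
struct Rgb { int r, g, b; };

Rgb colorFor(int state) {
    switch (state) {
        case 1:  return {0, 255, 0};     // happy -> green
        case 2:  return {255, 0, 0};     // mad   -> red
        case 3:  return {0, 0, 255};     // sad   -> blue
        default: return {128, 128, 128}; // idle  -> gray
    }
}
```

On an Arduino, each channel would then be driven with `analogWrite(pin, duty)`.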

 

LED Driver: In order to drive the LEDs we need to connect them safely so that they do not burn out the pins they are connected to. This is done by switching each of the R, G, and B pins on the LED strip through a darlington transistor so that we can safely sink the current flowing through the LEDs to ground. Each of the R, G, and B pins connects to the collector of a transistor, each emitter goes to ground, and each base is driven (through a current-limiting resistor) by an output pin on our controller.
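A rough sizing of the base resistor can be sketched as below; the gain and base-emitter drop are typical darlington figures we are assuming for illustration, not values for our final parts.

```cpp
#include <cassert>

// Rough base-resistor sizing for a darlington low-side LED switch.
// hFE and Vbe are typical datasheet figures (assumed, not measured).
double baseResistorOhms(double driveV, double ledCurrentA) {
    const double hFE = 1000.0;    // typical darlington current gain
    const double vbe = 1.4;       // two stacked base-emitter drops
    const double overdrive = 10;  // drive the base ~10x harder for saturation
    double ib = ledCurrentA / hFE * overdrive;
    return (driveV - vbe) / ib;
}
```

For a 5 V drive pin and a 200 mA channel this works out to 1.8 kΩ, so a standard 1 kΩ-1.8 kΩ resistor would be a reasonable starting point.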

 

Camera: We are going to use a camera that supports SPI communication for face detection. The camera will capture images at 160 x 120 resolution and send the data to our FPGA, also storing it into flash memory so that we can perform facial-detection processing on the image. The desired frame rate for the camera is 30 frames per second.

 

PROM-M25P80: This serial flash memory stores the bitstream file and is in charge of loading the program into the FPGA when it is powered up. The memory communicates with the FPGA via an SPI interface.

 

Arduino: The Arduino will be the master, controlling the LEDs, speaker, servos, and the FPGA.

 

4. Schematics

Custom FPGA for camera – (S1):

 

The Spartan-6 controls only the camera and performs the face-detection processing. The custom Spartan-6 PCB will measure 40 × 40 mm; we want the PCB to be small so it can be attached to the camera. This board acts as the slave: it tells the master controller whether it has detected a face. If no face is detected, the master controller moves the servos and keeps searching until the slave reports that a face has been detected.

 

The firmware will be coded in Verilog.

 

Dual Power supply (5V and 12V) – (S2):

The dual power supply will output 5 V and 12 V, 48 W max. It provides power to the Spartan-6, Arduino, servos, and LEDs.

 

Arduino controlling for servos and LED and speaker – (S3):

 

The Arduino is our main board and acts as the master. It controls the servos, LEDs, and speaker, and polls the slave Spartan-6 to find out whether it has detected a face.

 

The firmware will be coded in C++.

Software:

1. Flow Charts of Software

Top-Level Software Flowchart:

Overall Flowchart Description:

This is a top-level flowchart representation of our software. As long as power is provided to the Desk-Buddy and it is turned on, it constantly looks for a person’s face by moving its servos around. Once it finds a face, the Desk-Buddy changes its mood (state) and displays this by changing its LED color and outputting a specific sound.

If a face was previously detected but the Desk-Buddy no longer detects it, the Desk-Buddy enters the mad state for up to 10 seconds. If it recognizes a face before the 10-second mad window is up, it immediately changes to the happy state and disregards anything it was doing in the mad state. If the Desk-Buddy stays in the mad state for the full 10 seconds, it switches to its sad state for 5 seconds. Again, if it detects a face before this window elapses, it automatically disregards its current state and enters the happy state. Once the Desk-Buddy has been sad for 5 seconds, it transitions to the idle state, where it stays indefinitely until it detects a face.

For the sake of not cluttering the top-level flowchart, we made sub-flowcharts for the Move Servos, Look For Face, Follow Face, and Output Noise/LED Color blocks referenced in this flowchart. Each sub-flowchart has its own description of the data flow through it. Each block in this flowchart also has a block number associated with it; in the Block Descriptions section under this flowchart we discuss the purpose of each block and what it does.

Flowchart:
Block Descriptions:

Begin: This block simply notifies the user that the data flow has begun in the flowchart (its purpose is for the reader and it does not have any relevance in our software design). That is why we did not assign it a number.

  1. Initialize All Servos (all Servos moved to 5 degree starting point): This block initializes all of the servos on startup and moves each of the five servos in our design to a starting point of 5 degrees. This block only occurs during the startup of the system and its purpose is to initialize all of the servos to a known state.
  2. Initialize Camera (have it start storing / processing data to detect a face): This block serves the purpose of signaling the camera to take a picture and process the image to see whether a face was detected in the image or not. This block calls the Look For Face Sub-Flowchart.
  3. Initialize LEDs (have them change to gray color, which represents the Idle state): This block sets the LEDs to gray, which signifies that the Desk-Buddy is in the idle state. It serves the purpose of setting the LEDs on the Desk-Buddy to represent that it is in the idle state.
  4. State = 0; (idle state): This block serves the purpose of setting the Desk-Buddy to a known state upon startup (the idle state). The state of the Desk-Buddy is used to determine what LED color should be displayed and what output noise should be made according to what state it is in.
  5. Reset / Start Timer: The purpose of this block is to start and reset the timer associated with the idle state. Initially, the Desk-Buddy will try to detect a face for 15 seconds, and if it does not detect one within this time window, it will go to the sad state.
  6. Initialize speaker output (make a boot-up noise to notify user that robotic arm is powered on): This block serves the purpose of outputting a boot-up noise to the speaker on the Desk-Buddy so that the user can know that it has been turned on.
  7. Face Detected?: This block is a decision block that checks to see if a face was detected or not based on the output of the Look For Face Flowchart.
  8. if (State == 1): This block follows the true path of block #6. It is another decision that checks to see if the Desk-Buddy is currently in the happy state (state 1) when it entered this path.
  9. State = 1; (happy state): This block follows the false path of block #7. If it was not in the happy state, then we change it to the happy state by changing the value of State to a 1.
  10. Output LEDS and Noise: We then need to change the colors on the LEDs to green to signify that the Desk-Buddy is in a happy state. We also need to output a happy noise to the Desk-Buddy’s speaker. These are the functions that this block is in charge of. It calls the Output LEDs and Noise Sub-Flowchart to achieve these tasks.
  11. Follow Face: This block gets executed by both the true and false paths of block #7. The purpose of this block is to call the Follow Face Sub-Flowchart. This Sub-Flowchart will take a picture and try to detect a person’s face on it. If it does detect that person’s face, then it will update the servos accordingly in order to try and get that person’s face to be in the middle of the image.
  12.  if (State  == 1): This block follows the false path of block #6. It is another decision that checks to see if the Desk-Buddy is currently in the happy state (state 1) when it entered this path.
  13. State = 2; (mad state): This block follows the true path of block #11. If a face was not detected and the Desk-Buddy is currently in the happy state, then we change its State value to a 2 (mad state). This signifies that the Desk-Buddy is mad that it could not detect a face all of a sudden (as stated before, the Desk-Buddy is very needy!).
  14. Output LEDS and Noise: We then need to change the colors on the LEDs to red to signify that the Desk-Buddy is in a mad state. We also need to output a mad noise to the Desk-Buddy’s speaker. These are the functions that this block is in charge of. It calls the Output LEDs and Noise Sub-Flowchart to achieve these tasks.
  15. Reset / Start Timer: This block resets the timer and starts it off fresh from 0. This timer is needed because we decided that the Desk-Buddy will only stay in its mad state for a finite amount of time (10 seconds to be exact).
  16. Get Elapsed Time Value: This block is responsible for retrieving the current amount of time that has elapsed. We need this value to make sure that the Desk-Buddy moves onto the next state after 10 seconds have passed. If the Desk-Buddy detects a face before these 10 seconds pass, then it will automatically enter the happy state and disregard the previous state and elapsed time in that state (It is very naive and trusting).
  17. if (ElapsedTime >= 10s): This block is a decision block that checks to see if the Desk-Buddy has reached the time limit for its mad state (10 seconds).
  18. State = 3; (sad state): This block follows the true path of block #16. If the Desk-Buddy has reached its time limit, we then move onto the next state by changing the State value to a 3 (sad state).
  19. Output LEDS and Noise: We then need to change the colors on the LEDs to blue to signify that the Desk-Buddy is in a sad state. We also need to output a sad noise to the Desk-Buddy’s speaker. These are the functions that this block is in charge of. It calls the Output LEDs and Noise Sub-Flowchart to achieve these tasks.
  20. Reset / Start Timer: This block resets the timer and starts it off fresh from 0. This timer is needed because we decided that the Desk-Buddy will only stay in its sad state for a finite amount of time (5 seconds to be exact).
  21. Get Elapsed Time Value: This block is responsible for retrieving the current amount of time that has elapsed. We need this value to make sure that the Desk-Buddy moves onto the next state after 5 seconds have passed. If the Desk-Buddy detects a face before these 5 seconds pass, then it will automatically enter the happy state and disregard the previous state and elapsed time in that state (It is very naive and trusting).
  22. if (ElapsedTime >= 5s): This block is a decision block that checks to see if the Desk-Buddy has reached the time limit for its sad state (5 seconds).
  23. Stop/Disable Timer: This block is responsible for stopping and disabling the timer so that it does not run while the Desk-Buddy is stuck doing nothing in the idle state.
  24. Output LEDS and Noise: We then need to change the colors on the LEDs to gray to signify that the Desk-Buddy has entered its indefinite idle state. We also need to output an idle noise to the Desk-Buddy’s speaker. These are the functions that this block is in charge of. It calls the Output LEDs and Noise Sub-Flowchart to achieve these tasks.
  25. State = 0; (indefinite idle state): We then move onto the next state by changing the State value to a 0 (indefinite idle state). That is what this block is responsible for doing. When the Desk-Buddy enters the indefinite idle state, it does nothing, because it would be extremely inefficient to have the Desk-Buddy always moving. The Desk-Buddy enters this indefinite idle state after looking for a person’s face for 15 seconds and not being able to detect any faces.
  26. Input push button status: This block is responsible for getting the push-button status for the indefinite idle state. We figured that using a push button to break the Desk-Buddy out of the indefinite idle state was the easiest way to get the Desk-Buddy to change its state.
  27. if (push button asserted): This is a decision block that checks to see if the push button associated with getting the Desk-Buddy out of its indefinite idle state was pushed or not. If it was, then we exit the indefinite idle state; otherwise we loop back to block #25.
  28. Reset / Start Timer: Now that the Desk-Buddy is out of the indefinite idle state, we need to re-enable and restart the timer (to look for a face for another 15 seconds before the Desk-Buddy re-enters the indefinite idle state).
  29. else if (State == 2): This block follows the false path of block #11. This is a decision block that checks to see if the Desk-Buddy is currently in the mad state (State == 2).
  30. else if (State == 3): This block follows the false path of block #28. This is a decision block that checks to see if the Desk-Buddy is currently in the sad state (State == 3).
  31. Get Elapsed Time Value: This block follows the false path of block #29. If the Desk-Buddy was neither in State 1, State 2, or State 3, that means it is in the idle state. This block is responsible for seeing how much time has elapsed since the Desk-Buddy has been in the idle state.
  32. if (ElapsedTime >= 15s): This is a decision block that checks to see if the Desk-Buddy has reached its limit to search for a face while in the idle state (15 seconds). If it has, then the Desk-Buddy moves into a sad state (State = 3).
  33. Move Servos: This block is responsible for updating the servo positions. This block calls the Move Servos Sub-Flowchart to update the servo positions. How the Move Servos Sub-Flowchart works is that it will follow a pattern and move only certain servos at a time so that it can cover the most surface area while taking pictures and processing them to see if a face was detected or not.
  34. Look for face: This block is responsible for telling the camera to take a picture and processing that picture to see if a face was detected inside of it or not. This is achieved by calling the Look For Face Sub-Flowchart.
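The state transitions described above can be condensed into a small state-machine sketch in C++. This is our own simplification of the flowchart (the struct, helper names, and per-call time step are ours); it keeps only the state/timer logic and omits the servo, LED, and speaker calls.

```cpp
#include <cassert>

// States: 0 = idle, 1 = happy, 2 = mad, 3 = sad.
// Timings follow the flowchart: 10 s mad window, 5 s sad window.
struct Buddy {
    int state = 0;
    float elapsed = 0;  // seconds spent in the current timed state
};

void step(Buddy &b, bool faceDetected, bool buttonPressed, float dt) {
    if (faceDetected) {                  // a detected face always wins
        b.state = 1; b.elapsed = 0; return;
    }
    b.elapsed += dt;
    switch (b.state) {
        case 1: b.state = 2; b.elapsed = 0; break;             // lost face -> mad
        case 2: if (b.elapsed >= 10) { b.state = 3; b.elapsed = 0; } break;
        case 3: if (b.elapsed >= 5)  { b.state = 0; } break;   // sad -> idle
        default: if (buttonPressed) b.elapsed = 0; break;      // wake from idle
    }
}
```

Each pass through the real main loop would follow a `step` call with the matching Output LEDs and Noise, Move Servos, or Follow Face sub-flowchart.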

Look For Face Flowchart:

Overall Flowchart Description:

The Face detection flow chart uses Kovac Models and YCrCb equations for Skin Detection. Each block in this flowchart also has a block number associated with it. In the Block Descriptions section under this flowchart we discuss the purpose of each block and what they do.

Flowchart:

 

Block Descriptions:

Begin: This block simply notifies the user that the data flow has begun in the flowchart (its purpose is for the reader and it does not have any relevance in our software design). That is why we did not assign it a number.

  1. Signal Camera to Take picture: This block is responsible for getting the camera to take a picture. Once the camera takes a picture, the picture data is then saved to memory in the next step.
  2. Transmit Data image from camera to memory: Now that we have image data, we then save it to memory so that we can process the image to see if there is a face detected on the image or not. This block is responsible for making sure that the image data is correctly saved to the Micron memory within the Nexys2 FPGA board.
  3. Perform Kovac Models:  This block is responsible for performing the Kovac Models on the image.
  4. Perform YCrCb Conversion and Skin Equations: This block is responsible for performing the RGB to YCrCb Conversion and then using the Skin equations to detect skin on the image.
  5. Combine Results: This block is responsible for combining the previous results together.
  6. See if skin was detected: This block is responsible for checking whether enough skin was detected in the image to indicate a face.
  7. Locate Face coordinates on image: If a face is detected, then this block is responsible for returning the coordinates on the image where the face was detected. For the sake of simplicity, we decided that the coordinate that will be returned will be the center point in-between both of the eyes on a face. In this way, we will be able to get the coordinates of roughly the center of the face.
  8. Return Results (was face actually detected or not): This block is in charge of returning the actual results of the face-detection algorithm. It returns the final verdict on whether a face was located in the image or not. This block also returns the coordinates of the face (block #6) if a face was detected.

End: This block simply notifies the user that the data flow has ended in the flowchart (its purpose is for the reader and it does not have any relevance in our software design). That is why we did not assign it a number.
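The per-pixel skin test at the heart of blocks #2-#4 can be sketched as below in C++. The Kovac rules and YCrCb thresholds shown are commonly cited defaults for these methods, not our final tuned values, and the function names are ours.

```cpp
#include <cassert>
#include <algorithm>
#include <cstdlib>

// Kovac model: RGB rules for skin under uniform daylight illumination.
bool kovacSkin(int r, int g, int b) {
    int mx = std::max({r, g, b}), mn = std::min({r, g, b});
    return r > 95 && g > 40 && b > 20 && (mx - mn) > 15 &&
           std::abs(r - g) > 15 && r > g && r > b;
}

// RGB -> YCrCb conversion followed by a chrominance threshold test.
bool ycrcbSkin(int r, int g, int b) {
    double y  = 0.299 * r + 0.587 * g + 0.114 * b;
    double cr = (r - y) * 0.713 + 128;
    double cb = (b - y) * 0.564 + 128;
    return cr >= 133 && cr <= 173 && cb >= 77 && cb <= 127;
}

// "Combine Results" block: a pixel counts as skin only if both tests agree.
bool isSkin(int r, int g, int b) {
    return kovacSkin(r, g, b) && ycrcbSkin(r, g, b);
}
```

Running `isSkin` over every pixel of the 160 x 120 frame yields the skin mask that the later blocks use to decide whether a face is present and where its center lies.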

Move Servos Flowchart:

Overall Flowchart Description:

This flowchart controls the process of moving the servos to the appropriate location. The servo movement follows a specific pattern: move the base servo all the way left/right while adjusting the camera’s left/right servo accordingly; once the base servo has reached its maximum angle, increment the remaining servos and reverse the direction the base servo moves (right/left). This process repeats until the other servos reach their maximum angles, at which point their direction reverses (they start decrementing). This way, the Desk-Buddy is bound to detect a person’s face, because it covers the whole range of servo movements available to it. If the Desk-Buddy does detect a face, it then switches to the Follow Face flowchart instead of this one (which removes the inefficiency of checking every available servo-range combination in this flowchart).

We start this flowchart by determining whether the base servo (servo_1) has moved all the way left or all the way right. If it has, then we increment or decrement (depending on the current position of each servo) servo_2, servo_3, and servo_4. These three servos are only incremented/decremented after servo_1 has made a complete rotation from left to right or vice versa. We always increment/decrement servo_1 and servo_5 (the servo controlling the left and right movement of the camera). Each pass through this flowchart results either in only servo_1 and servo_5 updating their positions or in all of the servos updating their positions. Each update to a servo position is a 5° increment/decrement.

For the purposes of this flowchart, we show the maximum travel as 180°. In reality, however, not all of the servos will move through 180° (each servo will only move through a certain angle range depending on which segment of the arm it is on). For example, the servo at the base of the arm would move through 0-120 degrees, the servo at the arm segment through 0-100 degrees, the servo at the arm shade through 0-60 degrees, and the servo inside the shade that moves the camera left and right through 0-60 degrees. These angle ranges are not final and will be adjusted once we determine our final servo-movement ranges. Some servos will not need to move as far as others, so we need to make sure that each servo can reach its maximum angle. These final ranges can only be confirmed once we hook the servos up to the Desk-Buddy.

Each block in this flowchart also has a block number associated with it. In the Block Descriptions section under this flowchart we discuss the purpose of each block and what they do.
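The sweep update for a single servo, following the increment/decrement-flag scheme in this flowchart, can be sketched as below in C++ (the struct and function names are ours; the 5° step, 5° starting angle, and limit-check-then-reverse behavior come from the flowchart, with the per-servo limit passed in since the final ranges are not yet fixed):

```cpp
#include <cassert>

// One servo in the sweep pattern.
struct Servo {
    int angle = 5;     // degrees; all servos start at 5 degrees
    bool dec = false;  // sInc0Dec1 flag: false = increment, true = decrement
};

void sweepStep(Servo &s, int maxDeg) {
    if (s.angle >= maxDeg || s.angle <= 0)  // hit a limit: flip direction
        s.dec = !s.dec;
    s.angle += s.dec ? -5 : 5;              // move 5 degrees
}
```

In the full flowchart, servo_1 and servo_5 get a `sweepStep` on every pass, while servo_2 through servo_4 only get one after servo_1 completes a full left-to-right (or right-to-left) rotation.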

Flowchart:

 

Block Descriptions:

*Note: There are 5 servos in our design. Each servo controls a different position on the arm. We have a servo controlling the base of the arm, which turns from left to right (Servo_1). Another servo controls the base appendage of the Desk-Buddy, allowing it to go up and down (Servo_2). The third servo helps the Desk-Buddy lean forwards and backwards (Servo_3). The fourth servo moves the camera up and down (Servo_4). The fifth servo moves the camera left and right (Servo_5).

Begin: This block simply notifies the user that the data flow has begun in the flowchart (its purpose is for the reader and it does not have any relevance in our software design). That is why we did not assign it a number.

  1. s1Inc0Dec1= 0: This block is responsible for initializing the increment/decrement value for servo 1 (this value determines whether we are incrementing the servo (moving it to the left or up) or decrementing the servo (moving it to the right or down)). This block only gets executed once upon startup and is then ignored every time after that when this Sub-Flowchart is called upon. A zero value results in incrementing the servo position and a one value results in decrementing the servo position.
  2. s2Inc0Dec1= 0: This block is responsible for initializing the increment/decrement value for servo 2 (this value determines whether we are incrementing the servo (moving it to the left or up) or decrementing the servo (moving it to the right or down)). This block only gets executed once upon startup and is then ignored every time after that when this Sub-Flowchart is called upon. A zero value results in incrementing the servo position and a one value results in decrementing the servo position.
  3. s3Inc0Dec1= 0: This block is responsible for initializing the increment/decrement value for servo 3 (this value determines whether we are incrementing the servo (moving it to the left or up) or decrementing the servo (moving it to the right or down)). This block only gets executed once upon startup and is then ignored every time after that when this Sub-Flowchart is called upon. A zero value results in incrementing the servo position and a one value results in decrementing the servo position.
  4. s4Inc0Dec1= 0: This block is responsible for initializing the increment/decrement value for servo 4 (this value determines whether we are incrementing the servo (moving it to the left or up) or decrementing the servo (moving it to the right or down)). This block only gets executed once upon startup and is then ignored every time after that when this Sub-Flowchart is called upon. A zero value results in incrementing the servo position and a one value results in decrementing the servo position.
  5. s5Inc0Dec1= 0: This block is responsible for initializing the increment/decrement value for servo 5 (this value determines whether we are incrementing the servo (moving it to the left or up) or decrementing the servo (moving it to the right or down)). This block only gets executed once upon startup and is then ignored every time after that when this Sub-Flowchart is called upon. A zero value results in incrementing the servo position and a one value results in decrementing the servo position.
  6. retrieve Servo_1 position from memory: In order to keep track of the position of servo 1, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_1.
  7. Get Servo_1 sInc0Dec1 value: This block is responsible for retrieving the current  sInc0Dec1 value for servo 1. We need to know this value to know if we are incrementing or decrementing Servo_1.
  8. if (Servo_1 >= 180 Degrees) || (Servo_1 <= 0 Degrees): This is a decision block that tests whether servo 1 has reached either of its extreme value limits. If it has, that means that it cannot travel any further in that particular direction.
  9. s1Inc0Dec1 = ! s1Inc0Dec1: This block follows the true path of block #7. If servo 1 has reached its limit to travel in a specific direction, then we need to flip the polarity of s1Inc0Dec1. This allows Servo_1 to start traveling in the opposite direction. That is what this block is responsible for.
  10. retrieve Servo_2 position from memory: In order to keep track of the position of servo 2, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_2.
  11. if (Servo_2 >= 90  Degrees) || (Servo_2 <= 0 Degrees): This is a decision block that tests whether servo 2 has reached either of its extreme value limits. If it has, that means that it cannot travel any further in that particular direction.
  12. s2Inc0Dec1 = !s2Inc0Dec1: This block follows the true path of block #11. If servo 2 has reached its limit of travel in a specific direction, then we need to flip the polarity of s2Inc0Dec1. This allows Servo_2 to start traveling in the opposite direction.
  13. Get Servo_2 s2Inc0Dec1 value: This block is responsible for retrieving the current s2Inc0Dec1 value for servo 2. We need to know this value to know if we are incrementing or decrementing Servo_2. This block gets executed by both the true and false paths of block #11.
  14. if (s2Inc0Dec1 == 0): This is a decision block that tests whether the s2Inc0Dec1 value is 0. If it is, that means that we should be incrementing the servo, otherwise we should be decrementing it.
  15. Move Servo_2 5 degrees up: This block follows the true path of block #14. If the s2Inc0Dec1 value is zero, we then move Servo_2 five degrees up. This block is responsible for incrementing the servo position of Servo_2.
  16. Move Servo_2 5 degrees down: This block follows the false path of block #14. If the s2Inc0Dec1 value is a one, we then move Servo_2 five degrees down. This block is responsible for decrementing the servo position of Servo_2.
  17. Save Servo_2 position to memory: Once we update the position of Servo_2, it becomes very important to keep track of the current position of Servo_2. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  18. retrieve servo_3 position from memory: In order to keep track of the position of servo 3, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_3.
  19. if (Servo_3 >= 180 Degrees) || (Servo_3 <= 0 Degrees): This is a decision block that tests whether servo 3 has reached either of its extreme value limits. If it has, that means that it cannot travel any further in that particular direction.
  20. s3Inc0Dec1 = !s3Inc0Dec1: This block follows the true path of block #19. If servo 3 has reached its limit of travel in a specific direction, then we need to flip the polarity of s3Inc0Dec1. This allows Servo_3 to start traveling in the opposite direction.
  21. Get Servo_3 s3Inc0Dec1 value: This block is responsible for retrieving the current s3Inc0Dec1 value for servo 3. We need to know this value to know if we are incrementing or decrementing Servo_3. This block gets executed by both the true and false paths of block #19.
  22. if (s3Inc0Dec1 == 0): This is a decision block that tests whether the s3Inc0Dec1 value is 0. If it is, that means that we should be incrementing the servo, otherwise we should be decrementing it.
  23. Move Servo_3 5 degrees backwards (up): This block follows the true path of block #22. If the s3Inc0Dec1 value is zero, we then move Servo_3 five degrees backwards (up). This block is responsible for incrementing the servo position of Servo_3.
  24. Move Servo_3 5 degrees forwards (down): This block follows the false path of block #22. If the s3Inc0Dec1 value is a one, we then move Servo_3 five degrees forwards (down). This block is responsible for decrementing the servo position of Servo_3.
  25. Save Servo_3 position to memory: Once we update the position of Servo_3, it becomes very important to keep track of the current position of Servo_3. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  26. retrieve servo_4 position from memory: In order to keep track of the position of servo 4, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_4.
  27. if (Servo_4 >= 90 Degrees) || (Servo_4 <= 0 Degrees): This is a decision block that tests whether servo 4 has reached either of its extreme value limits. If it has, that means that it cannot travel any further in that particular direction.
  28. s4Inc0Dec1 = !s4Inc0Dec1: This block follows the true path of block #27. If servo 4 has reached its limit of travel in a specific direction, then we need to flip the polarity of s4Inc0Dec1. This allows Servo_4 to start traveling in the opposite direction.
  29. Get Servo_4 s4Inc0Dec1 value: This block is responsible for retrieving the current s4Inc0Dec1 value for servo 4. We need to know this value to know if we are incrementing or decrementing Servo_4. This block gets executed by both the true and false paths of block #27.
  30. if (s4Inc0Dec1 == 0): This is a decision block that tests whether the s4Inc0Dec1 value is 0. If it is, that means that we should be incrementing the servo, otherwise we should be decrementing it.
  31. Move Servo_4 5 degrees up: This block follows the true path of block #30. If the s4Inc0Dec1 value is zero, we then move Servo_4 five degrees up. This block is responsible for incrementing the servo position of Servo_4. Once again, we have to save the Servo_4 position to memory after we update it.
  32. Move Servo_4 5 degrees down: This block follows the false path of block #30. If the s4Inc0Dec1 value is a one, we then move Servo_4 five degrees down. This block is responsible for decrementing the servo position of Servo_4. Once again, we have to save the Servo_4 position to memory after we update it.
  33. Get Servo_1 s1Inc0Dec1 value: This block is responsible for retrieving the current s1Inc0Dec1 value for servo 1. We need to know this value to know if we are incrementing or decrementing Servo_1. This block gets executed by both the true and false paths of block #30. It also gets executed by the false path of block #8.
  34. if (s1Inc0Dec1 == 0): This is a decision block that tests whether the s1Inc0Dec1 value is 0. If it is, that means that we should be incrementing the servo, otherwise we should be decrementing it.
  35. Move Servo_1 5 degrees to the left: This block follows the true path of block #34. If the s1Inc0Dec1 value is zero, we then move Servo_1 five degrees to the left. This block is responsible for incrementing the servo position of Servo_1.
  36. Move Servo_1 5 degrees to the right: This block follows the false path of block #34. If the s1Inc0Dec1 value is a one, we then move Servo_1 five degrees to the right. This block is responsible for decrementing the servo position of Servo_1.
  37. Save Servo_1 position to memory:  Once we update the position of Servo_1, it becomes very important to keep track of the current position of Servo_1. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  38. retrieve servo_5 position from memory: In order to keep track of the position of servo 5, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_5.
  39. if (Servo_5 >= 180 Degrees) || (Servo_5 <= 0 Degrees): This is a decision block that tests whether servo 5 has reached either of its extreme value limits. If it has, that means that it cannot travel any further in that particular direction.
  40. s5Inc0Dec1 = !s5Inc0Dec1: This block follows the true path of block #39. If servo 5 has reached its limit of travel in a specific direction, then we need to flip the polarity of s5Inc0Dec1. This allows Servo_5 to start traveling in the opposite direction.
  41. Get Servo_5 s5Inc0Dec1 value: This block is responsible for retrieving the current s5Inc0Dec1 value for servo 5. We need to know this value to know if we are incrementing or decrementing Servo_5. This block gets executed by both the true and false paths of block #39.
  42. if (s5Inc0Dec1 == 0): This is a decision block that tests whether the s5Inc0Dec1 value is 0. If it is, that means that we should be incrementing the servo, otherwise we should be decrementing it.
  43. Move Servo_5 5 degrees to the left: This block follows the true path of block #42. If the s5Inc0Dec1 value is zero, we then move Servo_5 five degrees to the left. This block is responsible for incrementing the servo position of Servo_5.
  44. Move Servo_5 5 degrees to the right: This block follows the false path of block #42. If the s5Inc0Dec1 value is a one, we then move Servo_5 five degrees to the right. This block is responsible for decrementing the servo position of Servo_5.
  45. Save Servo_5 position to memory: Once we update the position of Servo_5, it becomes very important to keep track of the current position of Servo_5. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.

End: This block simply notifies the user that the data flow has ended in the flowchart (its purpose is for the reader and it does not have any relevance in our software design). That is why we did not assign it a number.
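The per-servo pattern that the blocks above repeat (check the limits, flip the increment/decrement flag, then step 5 degrees) can be sketched as a single helper function. This is our own illustrative sketch, not the report's source code; the names sweepStep, pos, and incDec are assumptions.

```cpp
// Sketch of the sweep step applied to each servo above (names assumed).
// pos    : current servo angle in degrees
// incDec : 0 = incrementing (left/up), 1 = decrementing (right/down)
void sweepStep(int &pos, int &incDec, int minDeg, int maxDeg) {
    // Flip the travel direction when either extreme has been reached.
    if (pos >= maxDeg || pos <= minDeg)
        incDec = !incDec;
    // Step 5 degrees in the current direction, then the caller saves
    // the updated position back to memory.
    pos += (incDec == 0) ? 5 : -5;
}
```

Each servo would call this with its own limits (0-180 for servos 1, 3, and 5; 0-90 for servos 2 and 4), which is why the flowchart repeats the same structure five times.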

Follow Face Flowchart:

Overall Flowchart Description:

Once our Desk-Buddy detects a face, then it will do its best to follow that person’s face wherever it goes. This flowchart describes the logic behind this process. The Desk-Buddy will always try to keep a person’s face in the center of the images that it is taking. This means that if a person’s face is detected in the middle of the image, then the Desk-Buddy will not move. If the person’s face is detected to the left of the Desk-Buddy, it will then move its base servo to the left and move the camera left/right servo to the right (to recenter the camera). If a person’s face is detected to the right of the Desk-Buddy, it will then move its base servo to the right and move the camera left/right servo to the left. The other four places that a face can be detected by the Desk-Buddy are in the upper-right portion of the image, the upper-left portion of the image, the lower-right portion of the image, and in the lower-left portion of the image. If the Desk-Buddy detects a face in one of these portions of the image, it will then adjust the appropriate servos so that the person’s face will be as close to the middle of the camera image as possible. This process, of course, only happens when the Desk-Buddy actually detects a face. If this is not the case, then the Desk-Buddy will move around in the defined pattern using the Move Servos Flowchart to look around for a person’s face. Each block in this flowchart also has a block number associated with it. In the Block Descriptions section under this flowchart we discuss the purpose of each block and what they do.
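The recentering moves described above are all guarded the same way: a servo is only stepped 5 degrees when it still has room before its extreme. A minimal sketch of that guard (our own names; the function guardedStep is not from the report) looks like this:

```cpp
// Hedged sketch of the guarded 5-degree move the Follow Face logic
// repeats for each servo: step only when a limit has not been reached.
int guardedStep(int pos, bool increase) {
    const int STEP = 5;
    if (increase && pos < 175)   // mirrors checks like "If (Servo_X < 175 degrees)"
        return pos + STEP;
    if (!increase && pos > 5)    // mirrors checks like "If (Servo_X > 5 degrees)"
        return pos - STEP;
    return pos;                  // at a limit: leave the position unchanged
}
```

After each guarded move, the updated position is written back to memory so the next iteration starts from the servo's true location.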

Flowchart:
Block Descriptions:

*Block Descriptions Comments:

*Note: There are 5 servos in our design. Each servo controls a different position on the arm. We have a servo controlling the base of the arm which will turn from left to right (Servo_1). There is another servo controlling the base appendage of the Desk Buddy, allowing it go up and down (Servo_2). The third servo helps the Desk Buddy lean forwards and backwards (Servo_3). The fourth servo helps the camera to move up and down (Servo_4). The fifth servo helps the camera to move left and right (Servo_5).

*                 middle up
*         ____________|____________
*        |  upper left | upper right |
*        | middle left | middle right|
*        |  lower left | lower right |
*        |____________ ____________|
*                     |
*                middle down

*In this flowchart we look for a face and determine where on the image that face was detected (if a face was indeed detected in the image). If a face is detected, it will always be within one of the eight different quadrants. Depending on which quadrant the face was detected in, the Desk Buddy will move the appropriate servos to try to get the person’s face to show up in the middle of the image. It will do this by updating a combination of servo_1, servo_2, and servo_5.

*servo_1 is updated in order to move the Desk Buddy base from left to right and vice versa.

*servo_2 is updated in order to move the Desk Buddy arm from up to down and vice versa.

*servo_5 is updated in order to move the Desk Buddy camera from left to right and vice versa.

*Note: We move servo_1 and servo_5 in opposite directions so that the camera remains centered with respect to the picture (if we move servo_1 5 degrees to the right, we want to move servo_5 5 degrees to the left to balance out the picture; otherwise the camera would swing off target).
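The quadrant lookup described in these comments can be sketched as a small classifier. This is our own illustration; the function name faceRegion and the thirds-based thresholds are assumptions, not taken from the report:

```cpp
#include <string>

// Illustrative classifier that buckets a detected face center (x, y)
// into one of the eight regions drawn above, or "center" when no
// servo update is needed. Thresholds (image thirds) are assumed.
std::string faceRegion(int x, int y, int width, int height) {
    std::string col = (x < width / 3)      ? "left"
                    : (x > 2 * width / 3)  ? "right" : "middle";
    std::string row = (y < height / 3)     ? "upper"
                    : (y > 2 * height / 3) ? "lower" : "middle";
    if (row == "middle" && col == "middle") return "center";
    if (col == "middle") return (row == "upper") ? "middle up" : "middle down";
    return row + " " + col;   // e.g. "upper right", "lower left"
}
```

Once the region is known, the flowchart steps servo_1 and servo_5 by the same 5 degrees in opposite directions (and servo_2 up or down) until the face lands back in the center region.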

 

Begin: This block simply notifies the user that the data flow has begun in the flowchart (its purpose is for the reader and it does not have any relevance in our software design). That is why we did not assign it a number.

  1. Look For Face: This block is responsible for telling the camera to take a picture and processing that picture to see if a face was detected inside of it or not. This is achieved by calling the Look For Face Sub-Flowchart.
  2. Face Detected?: This block is a decision block that checks to see if a face was detected or not based on the output of the Look For Face Flowchart.
  3. Retrieve faceLocation on image: This block follows the true path of block #2. It is responsible for retrieving the coordinates of a person’s face so that we know which quadrant their face was located in and we could adjust the servos accordingly to try and get that person’s face centered on the image.
  4. If (faceLocation == upper right): This is a decision block that checks to see if the person’s face coordinates were located on the upper right portion of the image. Look at the comments above for a better understanding of where each quadrant is with respect to all of the other quadrants. If the faceLocation is in the upper right quadrant of the image, that means that we want to move Servo_1 5 degrees to the right, Servo_2 5 degrees up, and Servo_5 5 degrees to the left. This will move the servos in a way where the upper right quadrant becomes closer to the center of the image.
  5. retrieve Servo_1 position from memory: In order to keep track of the position of servo 1, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_1.
  6. If (Servo_1 > 5 degrees): This is a decision block that checks to make sure that the Servo_1 right limit has not been reached. This ensures us that we can still move Servo_1 to the right.
  7. Move Servo_1 5 degrees to the right: This block follows the true path of block #6. If the right limit for Servo_1 has not been reached, we then move Servo_1 five degrees to the right. This block is responsible for incrementing the servo position of Servo_1.
  8. save Servo_1 position to memory: Once we update the position of Servo_1, it becomes very important to keep track of the current position of Servo_1. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  9. retrieve Servo_2 position from memory: In order to keep track of the position of servo 2, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_2
  10. If (Servo_2 < 175 degrees): This is a decision block that checks to make sure that the Servo_2 up limit has not been reached. This ensures us that we can still move Servo_2 up.
  11. Move Servo_2 5 degrees up: This block follows the true path of block #10. If the up limit for Servo_2 has not been reached, we then move Servo_2 five degrees up. This block is responsible for incrementing the servo position of Servo_2.
  12. save Servo_2 position to memory: Once we update the position of Servo_2, it becomes very important to keep track of the current position of Servo_2. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  13. retrieve Servo_5 position from memory: In order to keep track of the position of servo 5, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_5.
  14. If (Servo_5 < 175 degrees): This is a decision block that checks to make sure that the Servo_5 left limit has not been reached. This ensures us that we can still move Servo_5 to the left.
  15. Move Servo_5 5 degrees to the left: This block follows the true path of block #14. If the Servo_5 left limit has not been reached, we then move Servo_5 five degrees to the left. This block is responsible for incrementing the servo position of Servo_5.
  16. save Servo_5 position to memory: Once we update the position of Servo_5, it becomes very important to keep track of the current position of Servo_5. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  17. If (faceLocation == upper left): This is a decision block that checks to see if the person’s face coordinates were located on the upper left portion of the image. Look at the comments above for a better understanding of where each quadrant is with respect to all of the other quadrants. If the faceLocation is in the upper left quadrant of the image, that means that we want to move Servo_1 5 degrees to the left, Servo_2 5 degrees up, and Servo_5 5 degrees to the right. This will move the servos in a way where the upper left quadrant becomes closer to the center of the image.
  18. retrieve Servo_1 position from memory: In order to keep track of the position of servo 1, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_1.
  19. If (Servo_1 < 175 degrees): This is a decision block that checks to make sure that the Servo_1 left limit has not been reached. This ensures us that we can still move Servo_1 to the left.
  20. Move Servo_1 5 degrees to the left: This block follows the true path of block #19. If the Servo_1 left limit has not been reached, we then move Servo_1 five degrees to the left. This block is responsible for incrementing the servo position of Servo_1.
  21. save Servo_1 position to memory: Once we update the position of Servo_1, it becomes very important to keep track of the current position of Servo_1. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  22. retrieve Servo_2 position from memory: In order to keep track of the position of servo 2, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_2
  23. If (Servo_2 < 175 degrees): This is a decision block that checks to make sure that the Servo_2 up limit has not been reached. This ensures us that we can still move Servo_2 up.
  24. Move Servo_2 5 degrees up: This block follows the true path of block #23. If the up limit for Servo_2 has not been reached, we then move Servo_2 five degrees up. This block is responsible for incrementing the servo position of Servo_2.
  25. save Servo_2 position to memory: Once we update the position of Servo_2, it becomes very important to keep track of the current position of Servo_2. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  26. retrieve Servo_5 position from memory: In order to keep track of the position of servo 5, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_5.
  27. If (Servo_5 > 5 degrees): This is a decision block that checks to make sure that the Servo_5 right limit has not been reached. This ensures us that we can still move Servo_5 to the right.
  28. Move Servo_5 5 degrees to the right: This block follows the true path of block #27. If the right limit for Servo_5 has not been reached, we then move Servo_5 five degrees to the right. This block is responsible for incrementing the servo position of Servo_5.
  29. save Servo_5 position to memory: Once we update the position of Servo_5, it becomes very important to keep track of the current position of Servo_5. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  30. If (faceLocation == middle left): This is a decision block that checks to see if the person’s face coordinates were located on the middle left portion of the image. Look at the comments above for a better understanding of where each quadrant is with respect to all of the other quadrants. If the faceLocation is in the middle left quadrant of the image, that means that we want to move Servo_1 5 degrees to the left and Servo_5 5 degrees to the right. This will move the servos in a way where the middle left quadrant becomes closer to the center of the image.
  31. retrieve Servo_1 position from memory: In order to keep track of the position of servo 1, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_1.
  32. If (Servo_1 < 175 degrees): This is a decision block that checks to make sure that the Servo_1 left limit has not been reached. This ensures us that we can still move Servo_1 to the left.
  33. Move Servo_1 5 degrees to the left: This block follows the true path of block #32. If the Servo_1 left limit has not been reached, we then move Servo_1 five degrees to the left. This block is responsible for incrementing the servo position of Servo_1.
  34. save Servo_1 position to memory: Once we update the position of Servo_1, it becomes very important to keep track of the current position of Servo_1. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  35. retrieve Servo_5 position from memory: In order to keep track of the position of servo 5, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_5.
  36. If (Servo_5 > 5 degrees): This is a decision block that checks to make sure that the Servo_5 right limit has not been reached. This ensures us that we can still move Servo_5 to the right.
  37. Move Servo_5 5 degrees to the right: This block follows the true path of block #36. If the right limit for Servo_5 has not been reached, we then move Servo_5 five degrees to the right. This block is responsible for incrementing the servo position of Servo_5.
  38. save Servo_5 position to memory: Once we update the position of Servo_5, it becomes very important to keep track of the current position of Servo_5. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  39. If (faceLocation == middle right): This is a decision block that checks to see if the person’s face coordinates were located on the middle right portion of the image. Look at the comments above for a better understanding of where each quadrant is with respect to all of the other quadrants. If the faceLocation is in the middle right quadrant of the image, that means that we want to move Servo_1 5 degrees to the right and Servo_5 5 degrees to the left. This will move the servos in a way where the middle right quadrant becomes closer to the center of the image.
  40. retrieve Servo_1 position from memory: In order to keep track of the position of servo 1, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_1.
  41. If (Servo_1 > 5 degrees): This is a decision block that checks to make sure that the Servo_1 right limit has not been reached. This ensures us that we can still move Servo_1 to the right.
  42. Move Servo_1 5 degrees to the right: This block follows the true path of block #41. If the right limit for Servo_1 has not been reached, we then move Servo_1 five degrees to the right. This block is responsible for incrementing the servo position of Servo_1.
  43. save Servo_1 position to memory: Once we update the position of Servo_1, it becomes very important to keep track of the current position of Servo_1. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  44. retrieve Servo_5 position from memory: In order to keep track of the position of servo 5, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_5.
  45. If (Servo_5 < 175 degrees): This is a decision block that checks to make sure that the Servo_5 left limit has not been reached. This ensures us that we can still move Servo_5 to the left.
  46. Move Servo_5 5 degrees to the left: This block follows the true path of block #45. If the Servo_5 left limit has not been reached, we then move Servo_5 five degrees to the left. This block is responsible for incrementing the servo position of Servo_5.
  47. save Servo_5 position to memory: Once we update the position of Servo_5, it becomes very important to keep track of the current position of Servo_5. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  48. If (faceLocation == lower right): This is a decision block that checks to see if the person’s face coordinates were located on the lower right portion of the image. Look at the comments above for a better understanding of where each quadrant is with respect to all of the other quadrants. If the faceLocation is in the lower right quadrant of the image, that means that we want to move Servo_1 5 degrees to the right, Servo_2 5 degrees down, and Servo_5 5 degrees to the left. This will move the servos in a way where the lower right quadrant becomes closer to the center of the image.
  49. retrieve Servo_1 position from memory: In order to keep track of the position of servo 1, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_1.
  50. If (Servo_1 > 5 degrees): This is a decision block that checks to make sure that the Servo_1 right limit has not been reached. This ensures us that we can still move Servo_1 to the right.
  51. Move Servo_1 5 degrees to the right: This block follows the true path of block #50. If the right limit for Servo_1 has not been reached, we then move Servo_1 five degrees to the right. This block is responsible for incrementing the servo position of Servo_1.
  52. save Servo_1 position to memory: Once we update the position of Servo_1, it becomes very important to keep track of the current position of Servo_1. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  53. retrieve Servo_2 position from memory: In order to keep track of the position of servo 2, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_2
  54. If (Servo_2 > 5 degrees): This is a decision block that checks to make sure that the Servo_2 down limit has not been reached. This ensures us that we can still move Servo_2 down.
  55. Move Servo_2 5 degrees down: This block follows the true path of block #54. If the down limit for Servo_2 has not been reached, we then move Servo_2 five degrees down. This block is responsible for decrementing the servo position of Servo_2.
  56. save Servo_2 position to memory: Once we update the position of Servo_2, it becomes very important to keep track of the current position of Servo_2. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  57. retrieve Servo_5 position from memory: In order to keep track of the position of servo 5, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_5.
  58. If (Servo_5 < 175 degrees): This is a decision block that checks to make sure that the Servo_5 left limit has not been reached. This ensures us that we can still move Servo_5 to the left.
  59. Move Servo_5 5 degrees to the left: This block follows the true path of block #58. If the Servo_5 left limit has not been reached, we then move Servo_5 five degrees to the left. This block is responsible for incrementing the servo position of Servo_5.
  60. save Servo_5 position to memory: Once we update the position of Servo_5, it becomes very important to keep track of the current position of Servo_5. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  61. If (faceLocation == lower left): This is a decision block that checks to see if the person’s face coordinates were located on the lower left portion of the image. Look at the comments above for a better understanding of where each quadrant is with respect to all of the other quadrants. If the faceLocation is in the lower left quadrant of the image, that means that we want to move Servo_1 5 degrees to the left, Servo_2 5 degrees down, and Servo_5 5 degrees to the right. This will move the servos in a way where the lower left quadrant becomes closer to the center of the image.
  62. retrieve Servo_1 position from memory: In order to keep track of the position of servo 1, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_1.
  63. If (Servo_1 < 175 degrees): This is a decision block that checks to make sure that the Servo_1 left limit has not been reached. This ensures us that we can still move Servo_1 to the left.
  64. Move Servo_1 5 degrees to the left: This block follows the true path of block #63. If the Servo_1 left limit has not been reached, we then move Servo_1 five degrees to the left. This block is responsible for incrementing the servo position of Servo_1.
  65. save Servo_1 position to memory: Once we update the position of Servo_1, it becomes very important to keep track of the current position of Servo_1. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  66. retrieve Servo_2 position from memory: In order to keep track of the position of servo 2, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_2
  67. If (Servo_2 > 5 degrees): This is a decision block that checks to make sure that the Servo_2 down limit has not been reached. This ensures us that we can still move Servo_2 down.
  68. Move Servo_2 5 degrees down: This block follows the true path of block #67. If the down limit for Servo_2 has not been reached, we then move Servo_2 five degrees down. This block is responsible for decrementing the servo position of Servo_2.
  69. save Servo_2 position to memory: Once we update the position of Servo_2, it becomes very important to keep track of the current position of Servo_2. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  70. retrieve Servo_5 position from memory: In order to keep track of the position of servo 5, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_5.
  71. If (Servo_5 > 5 degrees): This is a decision block that checks to make sure that the Servo_5 right limit has not been reached. This ensures us that we can still move Servo_5 to the right.
  72. Move Servo_5 5 degrees to the right: This block follows the true path of block #71. If the right limit for Servo_5 has not been reached, we then move Servo_5 five degrees to the right. This block is responsible for incrementing the servo position of Servo_5.
  73. save Servo_5 position to memory: Once we update the position of Servo_5, it becomes very important to keep track of the current position of Servo_5. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  74. If (faceLocation == middle up): This is a decision block that checks to see if the person’s face coordinates were located on the middle up portion of the image. Look at the comments above for a better understanding of where each quadrant is with respect to all of the other quadrants. If the faceLocation is in the middle up quadrant of the image, that means that we want to move Servo_2 5 degrees up. This will move the servos in a way where the middle up quadrant becomes closer to the center of the image.
  75. retrieve Servo_2 position from memory: In order to keep track of the position of servo 2, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_2
  76. If (Servo_2 < 175 degrees): This is a decision block that checks to make sure that the Servo_2 up limit has not been reached. This ensures us that we can still move Servo_2 up.
  77. Move Servo_2 5 degrees up: This block follows the true path of block #76. If the up limit for Servo_2 has not been reached, we then move Servo_2 five degrees up. This block is responsible for incrementing the servo position of Servo_2.
  78. save Servo_2 position to memory: Once we update the position of Servo_2, it becomes very important to keep track of the current position of Servo_2. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  79. If (faceLocation == middle down): This is a decision block that checks to see if the person’s face coordinates were located on the middle down portion of the image. Look at the comments above for a better understanding of where each quadrant is with respect to all of the other quadrants. If the faceLocation is in the middle down quadrant of the image, that means that we want to move Servo_2 5 degrees down. This will move the servos in a way where the middle down quadrant becomes closer to the center of the image.
  80. retrieve Servo_2 position from memory: In order to keep track of the position of servo 2, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_2
  81. If (Servo_2 > 5 degrees): This is a decision block that checks to make sure that the Servo_2 down limit has not been reached. This ensures us that we can still move Servo_2 down.
  82. Move Servo_2 5 degrees down: This block follows the true path of block #81. If the s2Inc0Dec1 value is a one, we then move Servo_2 five degrees down. This block is responsible for decrementing the servo position of Servo_2.
  83. save Servo_2 position to memory: Once we update the position of Servo_2, it becomes very important to keep track of the current position of Servo_2. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.

End: This block simply notifies the user that the data flow has ended in the flowchart (its purpose is for the reader and it does not have any relevance in our software design). That is why we did not assign it a number.
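The retrieve / check limit / move 5 degrees / save pattern repeated in blocks 63-83 can be sketched in C. This is a hypothetical model for illustration, not our Arduino code: the `servo_pos` array stands in for the saved positions in memory, and the 5- and 175-degree bounds come from the limit checks described above.

```c
#include <assert.h>

#define STEP    5    /* each move is 5 degrees */
#define MIN_POS 5    /* lower servo limit checked by the "> 5 degrees" blocks  */
#define MAX_POS 175  /* upper servo limit checked by the "< 175 degrees" blocks */

/* Stand-in for the saved servo positions in memory. */
static int servo_pos[6] = {90, 90, 90, 90, 90, 90};

/* Move one servo by +STEP or -STEP degrees, but only if the matching
   limit has not been reached; then save the new position back to
   memory. Returns the saved position so callers can track it. */
int move_servo(int id, int dir) {
    int pos = servo_pos[id];           /* retrieve position from memory */
    if (dir > 0 && pos < MAX_POS)      /* increment-direction limit check */
        pos += STEP;
    else if (dir < 0 && pos > MIN_POS) /* decrement-direction limit check */
        pos -= STEP;
    servo_pos[id] = pos;               /* save position to memory */
    return pos;
}
```

Driving the same servo repeatedly in one direction saturates at the limit instead of overshooting, which is exactly the behavior the decision blocks enforce.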

LED/Speaker Flowchart:

Overall Flowchart Description:

This flowchart corresponds to the LED color changes and the speaker output. Both the LEDs and the speaker have states that correspond to each “personality” or “mood.” The flowchart determines which mood (state) the Desk-Buddy is in, lights the matching RGB LED combination, and produces the corresponding sound output. It then waits until the Desk-Buddy switches to another state before changing the LEDs and sound; we only want the Desk-Buddy to play the appropriate noise and change its LED color once each time it enters a state. The flowchart starts by assigning the correct startup sound and setting the LEDs to the startup color (gray). It then checks whether the Desk-Buddy is in its happy state; if so, it produces a happy sound and turns the LEDs green to indicate happiness. The lights stay green until the mood changes. The mad state works the same way, producing a mad sound and red LEDs, and the sad state produces a sad noise along with blue LEDs. If the design does not detect a face for a certain amount of time, it enters an idle state, producing an idle noise along with gray LEDs. Each block in this flowchart also has a block number associated with it. In the Block Descriptions section under this flowchart we discuss the purpose of each block and what it does.

Flowchart:
Block Descriptions:

Begin: This block simply notifies the user that the data flow has begun in the flowchart (its purpose is for the reader and it does not have any relevance in our software design). That is why we did not assign it a number.

  1. Input State (from top-level flowchart): This block is responsible for getting the input State from the Top-Level Software flowchart. This State variable determines what color the LEDs change to and what specific noise to make.
  2. if (State == 0): This decision block checks to see if the State is equal to zero (if the system is in the idle state).
  3. Select proper R, G, B values for LEDs (Gray color): This block follows the true path of block #2. It is responsible for selecting the correct RGB combination to produce a gray color (the color associated with the idle state).
  4. Output Gray color on LEDs: This block is responsible for updating all of the LEDs in the design and making sure that all of them are gray.
  5. Select appropriate sound (Idle sound): This block is responsible for selecting the appropriate sound that signifies that the Desk-Buddy is in the idle state.
  6. Output Idle Sound to speaker: This block is responsible for outputting the selected sound to the speaker connected to the Desk-Buddy.
  7. if (State == 1): This block follows the false path of block #2. It is a decision block that checks to see if the State is equal to one (if the system is in the happy state).
  8. Select proper R, G, B values for LEDs (Green color): This block follows the true path of block #7. It is responsible for selecting the correct RGB combination to produce a green color (the color associated with the happy state).
  9. Output Green color on LEDs: This block is responsible for updating all of the LEDs in the design and making sure that all of them are green.
  10. Select appropriate sound (Happy sound): This block is responsible for selecting the appropriate sound that signifies that the Desk-Buddy is in the happy state.
  11. Output Happy Sound to speaker: This block is responsible for outputting the selected sound to the speaker connected to the Desk-Buddy.
  12. if (State == 2): This block follows the false path of block #7. It is a decision block that checks to see if the State is equal to two (if the system is in the mad state).
  13. Select proper R, G, B values for LEDs (Red color): This block follows the true path of block #12. It is responsible for selecting the correct RGB combination to produce a red color (the color associated with the mad state).
  14. Output Red color on LEDs: This block is responsible for updating all of the LEDs in the design and making sure that all of them are red.
  15. Select appropriate sound (Mad sound): This block is responsible for selecting the appropriate sound that signifies that the Desk-Buddy is in the mad state.
  16. Output Mad Sound to speaker: This block is responsible for outputting the selected sound to the speaker connected to the Desk-Buddy.
  17. if (State == 3): This block follows the false path of block #12. It is a decision block that checks to see if the State is equal to three (if the system is in the sad state).
  18. Select proper R, G, B values for LEDs (Blue color): This block follows the true path of block #17. It is responsible for selecting the correct RGB combination to produce a blue color (the color associated with the sad state).
  19. Output Blue color on LEDs: This block is responsible for updating all of the LEDs in the design and making sure that all of them are blue.
  20. Select appropriate sound (Sad sound): This block is responsible for selecting the appropriate sound that signifies that the Desk-Buddy is in the sad state.
  21. Output Sad Sound to speaker: This block is responsible for outputting the selected sound to the speaker connected to the Desk-Buddy.

End: This block simply notifies the user that the data flow has ended in the flowchart (its purpose is for the reader and it does not have any relevance in our software design). That is why we did not assign it a number.
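The decision chain in blocks 2-21 is a straight state-to-outputs mapping. A minimal C sketch of that mapping (the struct and function names are hypothetical; the RGB values are the ones used in our Arduino listing further below):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical lookup mirroring the flowchart: state 0..3 selects the
   LED color and the sound clip associated with each mood. */
typedef struct { int r, g, b; const char *sound; } Mood;

static const Mood MOODS[] = {
    { 20,  20,  20, "idle"  },  /* state 0: gray  */
    {  0, 150,   0, "happy" },  /* state 1: green */
    {150,   0,   0, "mad"   },  /* state 2: red   */
    {  0,   0, 150, "sad"   },  /* state 3: blue  */
};

const Mood *mood_for_state(int state) {
    if (state < 0 || state > 3) state = 0; /* unknown state: default to idle */
    return &MOODS[state];
}
```

A table lookup like this keeps the color/sound pairing in one place, whereas the flowchart expresses the same mapping as a chain of if/else decisions.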

 

2. Source code listings:

 

Capturing Image from the camera (Verilog)

`timescale 1ns / 1ps

module ov7670(clk, reset, vga_red, vga_green, vga_blue, v_sync, h_sync,
              D, PCLK, XCLK, VSYNC, HREF, SIOC, SIOD);

input clk, reset, PCLK, VSYNC, HREF;
input [7:0] D;
output reg XCLK;
output wire v_sync, h_sync;
output wire [2:0] vga_red, vga_green;
output wire [1:0] vga_blue;
output wire SIOC;
inout wire SIOD;

wire rstb, href_fall;
reg wea, href_ff;
wire [7:0] doutb;
reg [7:0] dina;
reg [15:0] addra, addrb, RGB565, addra_next;
reg [1:0] wr_hold;
//reg [6:0] hrefcnt;
reg [7:0] latched_d;
reg latched_href, latched_vsync, href_hold;
reg [1:0] line;
reg [6:0] href_last;

// Horizontal: 640 + 16 + 96 + 48
// Vertical:   480 + 10 + 2 + 33
parameter hcount = 799; // 640
parameter vcount = 524; // 480
parameter h_sp = 96;
parameter h_bp = 48;
parameter h_fp = 16;
parameter v_sp = 2;
parameter v_bp = 33;
parameter v_fp = 10;

reg [9:0] hcounter, vcounter;

ASIO ASIO(clk, reset, rstb);
I2C I2C(clk, rstb, SIOC, SIOD);

always @(posedge clk, negedge rstb)
    if (!rstb) XCLK <= 1'b0;
    else XCLK <= ~XCLK;

frame_buffer buffer (
    .clka(PCLK),   // input clka
    .wea(wea),     // input [0 : 0] wea
    .addra(addra), // input [15 : 0] addra
    .dina(dina),   // input [7 : 0] dina
    .clkb(XCLK),   // input clkb
    .addrb(addrb), // input [15 : 0] addrb
    .doutb(doutb)  // output [7 : 0] doutb
);

always @(negedge PCLK) begin
    latched_d <= D;
    latched_href <= HREF;
    latched_vsync <= VSYNC;
end

always @(posedge PCLK)
    if (latched_vsync) addra <= 16'b0;
    else if (wea) addra <= addra + 1'b1;
    else addra <= addra;

always @(posedge PCLK) href_hold <= latched_href;

always @(posedge PCLK, negedge rstb)
    if (!rstb) line <= 2'b0;
    else if (latched_vsync) line <= 2'b0;
    else if (!href_hold && latched_href) line <= line + 1'b1;
    else line <= line;

always @(posedge PCLK)
    if (latched_href) RGB565 <= {RGB565[7:0], latched_d};

always @(posedge PCLK)
    if (latched_vsync) href_last <= 7'b0;
    else begin
        if (href_last[6]) href_last <= 7'b0;
        else href_last <= {href_last[5:0], latched_href};
    end

always @(posedge PCLK)
    if (!latched_vsync && href_last[6] && (line == 2'b10)) wea <= 1'b1;
    else wea <= 1'b0;

always @(*)
    // if (RGB565[10:6]-RGB565[15:11] >= 3 && RGB565[10:6]-RGB565[15:11] <= 26)
    //     dina <= 8'hFF;
    // else dina <= 8'b0;
    dina <= {RGB565[15:13], RGB565[10:8], RGB565[4:3]};

always @(posedge XCLK, negedge rstb)
    if (!rstb) begin
        hcounter <= 10'b0;
        vcounter <= 10'b0;
    end
    else if (hcounter == hcount) begin
        hcounter <= 10'b0;
        if (vcounter == vcount) vcounter <= 10'b0;
        else vcounter <= vcounter + 1'b1;
    end
    else hcounter <= hcounter + 1'b1;

always @(posedge XCLK, negedge rstb)
    if (!rstb) addrb <= 16'b0;
    else if (addrb == 16'd19199) addrb <= 16'b0;
    else if ((hcounter < 160) && (vcounter < 120)) addrb <= addrb + 1'b1;
    else addrb <= addrb;

//vga_red   <= 3'b000;
//vga_green <= 3'b111;
//vga_blue  <= 2'b11;

// Hcnt >= 656 and Hcnt <= 751
// Vcnt >= 490 and Vcnt <= 491
assign v_sync = (vcounter >= 490 && vcounter <= 491) ? 1'b0 : 1'b1;
assign h_sync = (hcounter >= 656 && hcounter <= 751) ? 1'b0 : 1'b1;
assign {vga_red, vga_green, vga_blue} = ((hcounter < 160) && (vcounter < 120)) ? doutb : 8'b0;

endmodule
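The hcount/vcount parameters above encode standard 640x480 @ 60 Hz VGA timing, and the sync-pulse windows in the assigns follow from the porch widths in the comments. A quick C check of that arithmetic (the enum names are ours, for illustration only):

```c
#include <assert.h>

/* Standard 640x480@60 VGA line/frame structure, matching the module's
   parameters. The counters run 0..total-1, hence hcount = 799. */
enum { H_ACTIVE = 640, H_FP = 16, H_SYNC = 96, H_BP = 48 };
enum { V_ACTIVE = 480, V_FP = 10, V_SYNC = 2,  V_BP = 33 };

int h_total(void) { return H_ACTIVE + H_FP + H_SYNC + H_BP; } /* 800 pixels/line  */
int v_total(void) { return V_ACTIVE + V_FP + V_SYNC + V_BP; } /* 525 lines/frame  */

/* h_sync is driven low while hcounter is in [656, 751]:
   656 = active + front porch, 751 = 656 + sync width - 1. */
int hsync_start(void) { return H_ACTIVE + H_FP; }
int hsync_end(void)   { return H_ACTIVE + H_FP + H_SYNC - 1; }
```

The same derivation gives the vertical window: v_sync is low for vcounter in [490, 491], i.e. 480 + 10 through 490 + 2 - 1.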

 

//////////////////////////////////////////////////////
////  Synchronous Reset
////
////  Convert asynchronous reset to synchronous reset
////  and add a D flip-flop to make the output stable
//////////////////////////////////////////////////////
module ASIO(clk, reset, rstb);
input clk, reset;
output rstb;
reg rstb, rstb_delay;

always @(posedge clk, posedge reset)
    if (reset) {rstb, rstb_delay} <= 2'b0;
    else {rstb, rstb_delay} <= {rstb_delay, 1'b1};

endmodule

 

module I2C(clk, rstb, scl, sda);
input clk, rstb;
output wire scl;
inout wire sda;
reg wen;
wire Idle, fail, done;
reg [23:0] din;
reg [6:0] counter;

always @(posedge clk, negedge rstb)
    if (!rstb) wen <= 1'b0;
    else if (Idle && (counter < 7'h49)) wen <= 1'b1;
    else wen <= 1'b0;

always @(posedge clk, negedge rstb)
    if (!rstb) counter <= 7'b0;
    else if (done && (counter < 7'h49)) counter <= counter + 1'b1;
    else counter <= counter;

always @(*)
    case(counter)
        // 7'h00: din = 24'h421280; // COM7   Reset
        // 7'h01: din = 24'h421280; // COM7   Reset
        7'h02: din = 24'h421204; // COM7   Size & RGB output
        7'h03: din = 24'h421100; // CLKRC  Prescaler - Fin/(1+1)
        7'h04: din = 24'h420C00; // COM3   Lots of stuff, enable scaling, all others off
        7'h05: din = 24'h423E00; // COM14  PCLK scaling off
        7'h06: din = 24'h428C00; // RGB444 Set RGB format
        7'h07: din = 24'h420400; // COM1   no CCIR601
        7'h08: din = 24'h424010; // COM15  Full 0-255 output, RGB 565
        7'h09: din = 24'h423a04; // TSLB   Set UV ordering, do not auto-reset window
        7'h0A: din = 24'h421438; // COM9   AGC ceiling
        // 7'h0B: din = 24'h424f40; //x"4fb3; // MTX1  colour conversion matrix
        // 7'h0C: din = 24'h425034; //x"50b3; // MTX2  colour conversion matrix
        // 7'h0D: din = 24'h42510C; //x"5100; // MTX3  colour conversion matrix
        // 7'h0E: din = 24'h425217; //x"523d; // MTX4  colour conversion matrix
        // 7'h0F: din = 24'h425329; //x"53a7; // MTX5  colour conversion matrix
        // 7'h10: din = 24'h425440; //x"54e4; // MTX6  colour conversion matrix
        // 7'h11: din = 24'h42581e; //x"589e; // MTXS  Matrix sign and auto contrast
        // 7'h12: din = 24'h423dc0; // COM13  Turn on GAMMA and UV auto adjust
        // 7'h13: din = 24'h421100; // CLKRC  Prescaler - Fin/(1+1)
        7'h14: din = 24'h421711; // HSTART HREF start (high 8 bits)
        7'h15: din = 24'h421861; // HSTOP  HREF stop (high 8 bits)
        7'h16: din = 24'h4232A4; // HREF   Edge offset and low 3 bits of HSTART and HSTOP
        7'h17: din = 24'h421903; // VSTART VSYNC start (high 8 bits)
        7'h18: din = 24'h421A7b; // VSTOP  VSYNC stop (high 8 bits)
        7'h19: din = 24'h42030a; // VREF   VSYNC low two bits
        7'h1A: din = 24'h420e61; // COM5 (0x0E) 0x61
        7'h1B: din = 24'h420f4b; // COM6 (0x0F) 0x4B
        7'h1C: din = 24'h421602;
        7'h1D: din = 24'h421e37; // MVFP (0x1E) 0x07 -- flip and mirror image 0x3x
        7'h1E: din = 24'h422102;
        7'h1F: din = 24'h422291;
        7'h20: din = 24'h422907;
        7'h21: din = 24'h42330b;
        7'h22: din = 24'h42350b;
        7'h23: din = 24'h42371d;
        7'h24: din = 24'h423871;
        7'h25: din = 24'h42392a;
        7'h26: din = 24'h423c78; // COM12 (0x3C) 0x78
        7'h27: din = 24'h424d40;
        7'h28: din = 24'h424e20;
        7'h29: din = 24'h426900; // GFIX (0x69) 0x00
        7'h2A: din = 24'h426b4a;
        7'h2B: din = 24'h427410;
        7'h2C: din = 24'h428d4f;
        7'h2D: din = 24'h428e00;
        7'h2E: din = 24'h428f00;
        7'h2F: din = 24'h429000;
        7'h30: din = 24'h429100;
        7'h31: din = 24'h429600;
        7'h32: din = 24'h429a00;
        7'h33: din = 24'h42b084;
        7'h34: din = 24'h42b10c;
        7'h35: din = 24'h42b20e;
        7'h36: din = 24'h42b382;
        7'h37: din = 24'h42b80a;
        default: din = {24{1'b1}};
    endcase

chuI2C chu(clk, rstb, din, wen, scl, sda, Idle, fail, done);

endmodule

 

module chuI2C(clk, rstb, din, wen, scl, sda, Idle, fail, done);
input clk, rstb, wen;
input [23:0] din;
output wire scl, fail;
output reg Idle, done;
inout wire sda;

parameter HALF = 249; // 10us/20ns/2 = 250
parameter QUTR = 125; // 10us/20ns/4 = 125
parameter C_WIDTH = 8;

// define states
localparam [3:0]
    idle      = 4'h0,
    start     = 4'h1,
    scl_begin = 4'h2,
    data1     = 4'h3,
    data2     = 4'h4,
    data3     = 4'h5,
    ack1      = 4'h6,
    ack2      = 4'h7,
    ack3      = 4'h8,
    scl_end   = 4'h9,
    stop      = 4'hA,
    turn      = 4'hB;

reg [3:0] state_reg, state_next;
reg [C_WIDTH-1:0] c_reg, c_next;
reg [23:0] data_reg, data_next;
reg [2:0] bit_reg, bit_next;
reg [1:0] byte_reg, byte_next;
reg sda_out, scl_out;
reg sda_reg, scl_reg;
reg ack_reg, ack_next;

always @(posedge clk, negedge rstb)
    if (!rstb) begin
        sda_reg <= 1'b1;
        scl_reg <= 1'b1;
    end
    else begin
        sda_reg <= sda_out;
        scl_reg <= scl_out;
    end

assign scl = scl_reg;
assign sda = sda_reg ? 1'hz : 1'b0;
assign fail = ack_reg ? 1'b1 : 1'b0;

always @(posedge clk, negedge rstb)
    if (!rstb) begin
        state_reg <= idle;
        c_reg <= 8'b0;
        bit_reg <= 3'b0;
        byte_reg <= 2'b0;
        data_reg <= 24'b0;
        ack_reg <= 1'b0;
    end
    else begin
        state_reg <= state_next;
        c_reg <= c_next;
        bit_reg <= bit_next;
        byte_reg <= byte_next;
        data_reg <= data_next;
        ack_reg <= ack_next;
    end

always @(*) begin
    state_next = state_reg;
    scl_out = 1'b1;
    sda_out = 1'b1;
    c_next = c_reg + 1'b1; // timer counts continuously
    bit_next = bit_reg;
    byte_next = byte_reg;
    data_next = data_reg;
    ack_next = ack_reg;
    done = 1'b0;
    Idle = 1'b0;
    case(state_reg)
        idle: begin
            Idle = 1'b1;
            if (wen) begin
                data_next = din;
                bit_next = 3'b0;
                byte_next = 2'b0;
                c_next = 8'b0;
                state_next = start;
            end
        end
        start: begin
            sda_out = 1'b0;
            if (c_reg == HALF) begin
                c_next = 8'b0;
                state_next = scl_begin;
            end
        end
        scl_begin: begin
            scl_out = 1'b0;
            sda_out = 1'b0;
            if (c_reg == QUTR) begin
                c_next = 8'b0;
                state_next = data1;
            end
        end
        data1: begin
            sda_out = data_reg[23];
            scl_out = 1'b0;
            if (c_reg == QUTR) begin
                c_next = 8'b0;
                state_next = data2;
            end
        end
        data2: begin
            sda_out = data_reg[23];
            if (c_reg == HALF) begin
                c_next = 8'b0;
                state_next = data3;
            end
        end
        data3: begin
            sda_out = data_reg[23];
            scl_out = 1'b0;
            if (c_reg == QUTR) begin
                c_next = 8'b0;
                if (bit_reg == 3'h7)
                    state_next = ack1;
                else begin
                    data_next = {data_reg[22:0], 1'b0};
                    bit_next = bit_reg + 1'b1;
                    state_next = data1;
                end
            end
        end
        ack1: begin
            scl_out = 1'b0;
            if (c_reg == QUTR) begin
                c_next = 8'b0;
                state_next = ack2;
            end
        end
        ack2: begin
            if (c_reg == HALF) begin
                c_next = 8'b0;
                state_next = ack3;
                ack_next = sda;
            end
        end
        ack3: begin
            scl_out = 1'b0;
            if (c_reg == QUTR) begin
                c_next = 8'b0;
                if (ack_reg == 1'b1)
                    state_next = scl_end;
                else if (byte_reg == 2'h2)
                    state_next = scl_end;
                else begin
                    bit_next = 3'b0;
                    byte_next = byte_reg + 1'b1;
                    data_next = {data_reg[22:0], 1'b0};
                    state_next = data1;
                end
            end
        end
        scl_end: begin
            scl_out = 1'b0;
            sda_out = 1'b0;
            if (c_reg == QUTR) begin
                c_next = 8'b0;
                state_next = stop;
            end
        end
        stop: begin
            sda_out = 1'b0;
            if (c_reg == HALF) begin
                c_next = 8'b0;
                state_next = turn;
            end
        end
        turn: begin
            if (c_reg == HALF) begin
                done = 1'b1;
                state_next = idle;
            end
        end
    endcase
end

endmodule
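The HALF and QUTR parameters set the SCL bit timing from the 50 MHz system clock (the 20 ns tick in the comments). A C sketch of that arithmetic; note that HALF counts 0..249, i.e. 250 ticks, which is the off-by-one the module's own comment records:

```c
#include <assert.h>

/* Derive the chuI2C delay parameters from the clock period and the
   SCL bit time stated in the module's comments. All values in ns. */
#define CLK_NS 20     /* 50 MHz system clock -> 20 ns per tick   */
#define BIT_NS 10000  /* 10 us SCL bit period (from the comments) */

/* HALF: half a bit period, counted as 0..N-1, hence the -1. */
int half_param(void) { return BIT_NS / CLK_NS / 2 - 1; } /* = 249 */

/* QUTR: quarter bit period, used as-is in the original. */
int qutr_param(void) { return BIT_NS / CLK_NS / 4; }     /* = 125 */
```

A 10 us bit period corresponds to roughly 100 kHz SCL, the standard-mode I2C rate the OV7670's SCCB interface accepts.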

 

Finite State Machine / LEDs:

 

// Associate each number with its state name.
const int LOOKING_STATE = 0;
const int HAPPY_STATE = 1;
const int MAD_STATE = 2;
const int SAD_STATE = 3;
const int IDLE_STATE = 4;
const int INITIALIZE_TIMER = 5;

const int GREEN_LED = 11;     // Green LED signal going to RGB LED strip
const int RED_LED   = 10;     // Red LED signal going to RGB LED strip
const int BLUE_LED  = 9;      // Blue LED signal going to RGB LED strip
const int FACE_DETECTED = 12; // Simulates a face being detected
const int EXIT_IDLE = 13;     // the number of the pushbutton pin

unsigned long previousMillis = 0;   // keep track of last updated time
static unsigned long currentMillis; // keep track of current time

void setup() {
  pinMode(GREEN_LED, OUTPUT);    // initialize GREEN_LED as an output.
  pinMode(RED_LED, OUTPUT);      // initialize RED_LED as an output.
  pinMode(BLUE_LED, OUTPUT);     // initialize BLUE_LED as an output.
  pinMode(FACE_DETECTED, INPUT); // initialize FACE_DETECTED as an input.
  pinMode(EXIT_IDLE, INPUT);     // initialize EXIT_IDLE as an input.
} //close setup

void loop() {
  static int current_state = LOOKING_STATE; // Initialize to Looking State
  currentMillis = millis();
  switch(current_state){
    case LOOKING_STATE:
      // PWM combination for gray color:
      analogWrite(RED_LED, 20);   // analogWrite values from 0 to 255
      analogWrite(GREEN_LED, 20);
      analogWrite(BLUE_LED, 20);
      if(digitalRead(FACE_DETECTED) == HIGH)
        current_state = HAPPY_STATE;
      else if((unsigned long)(currentMillis - previousMillis) > 120000){
        // Desk-Buddy has been looking for a face for 2 minutes (120000 ms = 120 s)
        // and has not found anyone. Go to the idle state...
        current_state = IDLE_STATE;
        previousMillis = currentMillis; // update current time value
      } //end else if
      else
        current_state = LOOKING_STATE;
      break;
    case HAPPY_STATE:
      // PWM combination for green color:
      analogWrite(RED_LED, 0);
      analogWrite(GREEN_LED, 150);
      analogWrite(BLUE_LED, 0);
      if(digitalRead(FACE_DETECTED) == LOW){
        previousMillis = currentMillis; // update current time value
        current_state = MAD_STATE;
      } //end if
      else
        current_state = HAPPY_STATE;
      break;
    case MAD_STATE:
      // PWM combination for red color:
      analogWrite(RED_LED, 150);
      analogWrite(GREEN_LED, 0);
      analogWrite(BLUE_LED, 0);
      if(digitalRead(FACE_DETECTED) == HIGH){
        previousMillis = currentMillis; // update current time value
        current_state = HAPPY_STATE;
      } //end if
      else if((unsigned long)(currentMillis - previousMillis) > 10000){
        // Desk-Buddy has been in the Mad State for 10 seconds (10000 ms = 10 s).
        // Transition to the next state.
        current_state = SAD_STATE;
        previousMillis = currentMillis; // update current time value
      } //end else if
      else
        current_state = MAD_STATE;
      break;
    case SAD_STATE:
      // PWM combination for blue color:
      analogWrite(RED_LED, 0);
      analogWrite(GREEN_LED, 0);
      analogWrite(BLUE_LED, 150);
      if(digitalRead(FACE_DETECTED) == HIGH){
        previousMillis = currentMillis; // update current time value
        current_state = HAPPY_STATE;
      } //end if
      else if((unsigned long)(currentMillis - previousMillis) > 5000){
        // Desk-Buddy has been in the Sad State for 5 seconds (5000 ms = 5 s).
        // Transition to the next state.
        previousMillis = currentMillis; // update current time value
        current_state = LOOKING_STATE;
      } //end else if
      else
        current_state = SAD_STATE;
      break;
    case IDLE_STATE:
      // PWM combination for purple color:
      analogWrite(RED_LED, 50);
      analogWrite(GREEN_LED, 0);
      analogWrite(BLUE_LED, 50);
      if(digitalRead(EXIT_IDLE) == HIGH){
        previousMillis = currentMillis; // update current time value
        current_state = LOOKING_STATE;
      } //end if
      else
        current_state = IDLE_STATE;
      break;
    default:
      // LEDs off:
      analogWrite(RED_LED, 0);
      analogWrite(GREEN_LED, 0);
      analogWrite(BLUE_LED, 0);
      current_state = LOOKING_STATE;
      break;
  } //end switch statement
} //close loop
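The timeout tests in the state machine rely on unsigned subtraction, which stays correct even when millis() wraps its 32-bit counter after roughly 49.7 days. A small plain-C demonstration of why (a stand-in for illustration, not our Arduino code):

```c
#include <assert.h>
#include <stdint.h>

/* Elapsed-time check in the style of the state machine above:
   unsigned subtraction yields the true interval modulo 2^32, so the
   comparison is still correct across a wraparound of the counter. */
int timed_out(uint32_t now, uint32_t previous, uint32_t limit_ms) {
    return (uint32_t)(now - previous) > limit_ms;
}
```

For example, if previousMillis was recorded just before the counter wrapped, now - previous still comes out as the small elapsed interval rather than a huge negative-looking value.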

 

Face Detection Algorithm (Verilog)

///////////////////// This goes in the top level with the image processing /////////////////////
wire [16:0] Y;
wire [16:0] U;
wire [16:0] V;
wire [7:0] R;
wire [7:0] G;
wire [7:0] B;
wire [7:0] Max;
wire [7:0] Min;
wire [7:0] abs; // declaration added: without it, abs is an implicit 1-bit net

CRCB test(
    .R({RGB565[15:11], 3'b0}),
    .G({RGB565[10:5], 2'b0}),
    .B({RGB565[4:0], 3'b0}),
    .Y(Y),
    .U(U),
    .V(V));

assign R = {RGB565[15:11], 3'b0};
assign G = {RGB565[10:5], 2'b0};
assign B = {RGB565[4:0], 3'b0};
assign abs = (R > G) ? (R - G) : (G - R);

max max(
    .A(R),
    .B(G),
    .C(B),
    .Y(Max)
);
min min(
    .A(R),
    .B(G),
    .C(B),
    .Y(Min)
);

assign dina = ((V <= ((21'd1624*U)>>10) + 21'd20) && (V >= ((21'd353*U)>>10) + 21'd76)
            && (V >= (21'd235 - ((21'd954*U)>>10)))
            && (V <= (21'd302 - ((21'd1178*U)>>10))) && (V <= (21'd482 - ((21'd2612*U)>>10)))
            && Y > 8'd80
            && ((R > 8'd95) && (G > 8'd39) && (B > 8'd23) && ((R+4) > G) && (R > B)
            && ((Max - Min) > 8'd15))
            || ((R > 8'd215) && (G > 8'd207) && (B > 8'd167) && (abs < 8'd23) && (G > B) && (R > B)))
            ? 8'b00000011 : ((R < 105) && (G < 81) && (B < 73)
            && ((R-G) < 17) && ((R-B) < 33) && ((B-G) < 17))
            ? 8'b00011100 : {R[7:5], G[7:5], B[7:6]};
////////////////////////////////////////////////////////////////////////////////////////
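The RGB half of the skin classifier in the assign above (thresholds R > 95, G > 39, B > 23, R + 4 > G, R > B, Max - Min > 15, plus the bright-skin fallback branch) can be restated in C for offline testing. This mirrors the listed thresholds only; the full Verilog classifier also ANDs in the Y/U/V conditions:

```c
#include <assert.h>

static int max3(int a, int b, int c) { int m = a > b ? a : b; return m > c ? m : c; }
static int min3(int a, int b, int c) { int m = a < b ? a : b; return m < c ? m : c; }

/* RGB skin test matching the threshold terms of the dina assign:
   the ordinary skin-tone branch, OR the high-brightness fallback. */
int is_skin_rgb(int r, int g, int b) {
    int diff = r > g ? r - g : g - r; /* the "abs" term in the Verilog */
    int normal = (r > 95) && (g > 39) && (b > 23) && ((r + 4) > g) && (r > b)
              && ((max3(r, g, b) - min3(r, g, b)) > 15);
    int bright = (r > 215) && (g > 207) && (b > 167) && (diff < 23)
              && (g > b) && (r > b);
    return normal || bright;
}
```

Restating the rule this way makes it easy to sweep test pixels on a PC before committing the thresholds to hardware.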

 

`timescale 1ns / 1ps
//////////////////////////////////////////////////////////////////////////////////
module CRCB(
    input [7:0] R,
    input [7:0] G,
    input [7:0] B,
    output reg [16:0] Y,
    output reg [16:0] U,
    output reg [16:0] V);

always @(*) begin
    // cr =  .439R - .368G - .071B + 128
    // cb = -.148R - .291G + .439B + 128
    // cb = U, cr = V (coefficients scaled by 1024, hence the >>10 shifts)
    Y = ((8'd16)  + ((16'd285*R)>>10) + ((16'd516*G)>>10) + ((16'd100*B)>>10));
    U = ((8'd128) - ((16'd152*R)>>10) - ((16'd298*G)>>10) + ((16'd450*B)>>10));
    V = ((8'd128) + ((16'd450*R)>>10) - ((16'd377*G)>>10) - ((16'd73*B)>>10));
end
endmodule

//////////////////////////////////////////////////////////////////////////////////

//////////////////////////////////////////////////////////////////////////////////

 

`timescale 1ns / 1ps
module max(
    input [7:0] A,
    input [7:0] B,
    input [7:0] C,
    output reg [7:0] Y
);
always @(*) begin
    if (A > B && A > C)
        Y = A;
    else if (B > A && B > C)
        Y = B;
    else
        Y = C;
end
endmodule

 

//////////////////////////////////////////////////////////////////////////////////

//////////////////////////////////////////////////////////////////////////////////

 

`timescale 1ns / 1ps
module min(
    input [7:0] A,
    input [7:0] B,
    input [7:0] C,
    output reg [7:0] Y
);
always @(*) begin
    if (A < B && A < C)
        Y = A;
    else if (B < A && B < C)
        Y = B;
    else
        Y = C;
end
endmodule
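The CRCB module approximates the BT.601 RGB-to-YCbCr conversion with integer coefficients scaled by 1024. A C model of the same arithmetic (function names are ours; the math is copied term-for-term from the module so it can be sanity-checked on a PC):

```c
#include <assert.h>
#include <stdint.h>

/* Integer RGB -> Y/U/V exactly as in the CRCB module: coefficients are
   the BT.601 weights scaled by 1024, applied with a >>10 shift. */
uint32_t crcb_y(uint8_t r, uint8_t g, uint8_t b) {
    return 16u + ((285u * r) >> 10) + ((516u * g) >> 10) + ((100u * b) >> 10);
}
uint32_t crcb_u(uint8_t r, uint8_t g, uint8_t b) { /* Cb */
    return 128u - ((152u * r) >> 10) - ((298u * g) >> 10) + ((450u * b) >> 10);
}
uint32_t crcb_v(uint8_t r, uint8_t g, uint8_t b) { /* Cr */
    return 128u + ((450u * r) >> 10) - ((377u * g) >> 10) - ((73u * b) >> 10);
}
```

A mid-gray pixel lands at the chroma midpoint (U = V = 128), and a saturated blue pixel pushes U well above it, which is the behavior the skin-region thresholds in the dina assign depend on.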

 

Speaker:

*Reportedly being worked on, but no proof of any actual progress on the speaker has been shown…

 

Construction:

1. Completed Task List

 

Task Leader | Task Name | Duration | Start | Finish
Michael Parra | Research on Skin Detection (DONE) | 8d | 02/29/16 | 03/07/16
Michael Parra | Finish 3-D Printing Desk-Buddy Segments (DONE) | 7d | 03/08/16 | 03/14/16
Skyler Tran | Implement Code to Update Servo Positions (DONE) | 3d | 01/25/16 | 01/27/16
Skyler Tran | Create 3-D Base Segments for Desk-Buddy (DONE) | 2d | 02/01/16 | 02/03/16
Skyler Tran | Design Camera Interface for Desk-Buddy (DONE) | 5d | 02/04/16 | 02/08/16
Skyler Tran | Capture Image Data (DONE) | 5d | 02/09/16 | 02/13/16
Skyler Tran | Design PCB Schematic (DONE) | 6d | 02/15/16 | 02/20/16
Skyler Tran | Implement PCB and Send It to Manufacturer for Printing (DONE) | 13d | 02/22/16 | 02/29/16
Skyler Tran | Create More 3-D Parts for Desk-Buddy (DONE) | 2d | 03/03/16 | 03/04/16
Skyler Tran | Design Power Supply (DONE) | 3d | 03/07/16 | 03/09/16
Skyler Tran | Soldering Components on Custom PCB (DONE) | 6d | 03/27/16 | 04/01/16
Victor Espinoza | Selecting LED Color (DONE) | 5d | 01/25/16 | 01/29/16
Victor Espinoza | Create LED Driver (DONE) | 2d | 02/01/16 | 02/02/16
Victor Espinoza | Implement LED Color Changes Using Arduino (DONE) | 5d | 02/08/16 | 02/12/16
Victor Espinoza | Design State Machine for Desk-Buddy (DONE) | 3d | 02/15/16 | 02/17/16
Victor Espinoza | Implement State Machine Using Arduino Uno (DONE) | 5d | 02/22/16 | 02/26/16

 

2. Layouts of custom circuit board

PCB Layout for FPGA:

 

Soldered Printed custom PCB:

 

3. Prototyping Boards Used

Arduino UNO (With RGB LED Strip attached to it):

 

4. Constructed boxes, structures, etc…

Power Supply (Port 1: 5 V 4 A, Port 2: 12 V 4 A): The power supply provides power for all of the components in the circuit. The Spartan-6 FPGA draws approximately 200 mA. The five servos draw a total of 2.5 A (500 mA each). The 10-20 5 mm RGB LEDs draw about 200 mA (each LED draws 20 mA at 2 V), and the camera draws about 20 mA. The total current needed for our design is therefore less than 4 A, but we need to make sure that we provide sufficient current to each component, meaning we want headroom above what is needed. As such, we sized the power supply to deliver 4 A.
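The current budget in the paragraph above can be tallied explicitly. The figures are the ones quoted in the text (taking ten LEDs at 20 mA each for the 200 mA total); this is a back-of-the-envelope check, not a measured result:

```c
#include <assert.h>

/* Worst-case current budget from the text: Spartan-6 FPGA ~200 mA,
   five servos at 500 mA each, ten RGB LEDs at 20 mA each, and the
   camera at ~20 mA. All values in milliamps. */
int total_current_ma(void) {
    int fpga   = 200;
    int servos = 5 * 500; /* 2.5 A total  */
    int leds   = 10 * 20; /* 200 mA total */
    int camera = 20;
    return fpga + servos + leds + camera;
}
```

The sum comes out just under 3 A, so a 4 A port leaves roughly 1 A of headroom for servo stall surges and the upper end of the LED count.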

 

3-D printed parts: These are the 3-D printed segments of our Desk-Buddy. They have not been connected together yet because the person in charge of this task (Jose Trejo) has not been working on his tasks.

5. Task to be Completed:


 

Task Leader | Task Name | Duration | Start | Finish
Michael Parra | Connect Servos to Desk-Buddy | 2d | 03/19/16 | 03/20/16
Michael Parra | Connect Camera to Desk-Buddy | 2d | 03/25/16 | 03/27/16
Michael Parra | Implement Skin Detection Algorithm | 10d | 03/29/16 | 04/07/16
Michael Parra | Verify Facial Detection Range Engineering Specification | 5d | 04/08/16 | 04/12/16
Michael Parra | Verify Image Processing Engineering Specification | 5d | 04/14/16 | 04/18/16
Michael Parra | Verify Following a Face Engineering Specification | 5d | 04/20/16 | 04/24/16
Skyler Tran | Balancing Servos on the Desk-Buddy | 9d | 03/14/16 | 03/22/16
Skyler Tran | Verify PCB Design | 6d | 04/09/16 | 04/14/16
Victor Espinoza | Combine LED, Servo, and Speaker Code into State Machine | 5d | 03/10/16 | 03/14/16
Victor Espinoza | Debug State Machine Logic | 5d | 03/15/16 | 03/19/16
Victor Espinoza | Connecting LEDs on the Desk-Buddy | 2d | 03/23/16 | 03/24/16
Victor Espinoza | Verify State Transition Accuracy and Timing Constraints | 5d | 03/25/16 | 03/29/16
Victor Espinoza | Create User Manual for PCB Design | 13d | 03/31/16 | 04/12/16
Jose Trejo | Implement Sound Software Using Arduino Uno | 5d | 03/03/16 | 03/07/16
Jose Trejo | Implement Power Supply | 5d | 03/08/16 | 03/12/16
Jose Trejo | Create Speaker Driver | 2d | 03/13/16 | 03/14/16
Jose Trejo | Connect 3-D Printed Segments Together | 10d | 03/15/16 | 03/18/16
Jose Trejo | Connecting Speaker / Speaker Driver to Desk-Buddy | 2d | 03/21/16 | 03/22/16
Jose Trejo | Create Engineering Manual for Final Design | 13d | 03/28/16 | 04/09/16

 
