CECS 490A – Final Project Report

Due: 12/15/15

 

SERVOARM

 

Skyler Tran

Victor Espinoza

Michael Parra

Jose Trejo

 

General Description: Our project revolves around a robotic arm whose segments are controlled by five different servos. It is able to detect a human face using facial detection and follow their face around. It is also able to exhibit common emotions using LEDs and noise outputs.

Table of Contents:


Team Name:

SERVOARM

Team Members:

Skyler Tran:

Victor Espinoza:

Michael Parra:

Jose Trejo:

Introduction:

Product Diagram:

Refined Product Description:

Purpose:

Functionality / packaging:

Project Overview:

Existing Products:

  1. Pinokio (http://www.ben-dror.com/pinokio)
  2. Mira (https://www.youtube.com/watch?v=A7iAMELA_TY)

Project Objectives:

Specifications:

User Specifications:

Functional Specifications:

Engineering Specifications:

Verification of Engineering Specifications:

User Interface:

Theory:

Hardware Design:

Block Diagram:

Original Hardware Design:

Schematic:

Hardware Task List:

Software Design:

Original Software Design:

Top-Level Software Block Diagram:

Software Flowchart:

Top-Level Software Flowchart:

Overall Flowchart Description:

Flowchart:

Block Descriptions:

Look For Face Flowchart:

Overall Flowchart Description:

Flowchart:

Block Descriptions:

Move Servos Flowchart:

Overall Flowchart Description:

Flowchart:

Block Descriptions:

Follow Face Flowchart:

Overall Flowchart Description:

Flowchart:

Block Descriptions:

LED/Speaker Flowchart:

Overall Flowchart Description:

Flowchart:

Block Descriptions:

Software Task List:

Gantt Diagram:

Costs:

Above is a list of all major components that will be used in creating the Desk-Buddy. The total cost of the components comes out to $96.35. Developing the first Desk-Buddy will take us about 800 hours; these hours are broken down in the Hardware Task List. This applies only to the very first unit. After that, the build time should be cut roughly in half, since we would have the engineering manual and other completed work that can be reused. So producing 1,000 units would require a total component cost of $96,350.00 and approximately 400,000 hours of labor. We estimate that it will cost an additional hundred dollars or so to create our custom PCB (we will create 5 of them in case some are defective). The final product with our custom PCB added will cost around $300 (with building time taken into account as well). If we mass-produce our custom PCB, this price will become significantly cheaper and there will be a higher return on investment.

Conclusions:

Appendix A:

Appendix B:

Team Name:

     SERVOARM

Team Members:

Skyler Tran:

Skyler Tran was born in Vietnam and came to live in California at the age of 12. He is interested in studying computers and business and in becoming an entrepreneur. He enjoys reading the news, listening to music, learning, studying, eating, shopping, traveling, playing video games, and playing basketball. His current goal is to obtain a Computer Engineering degree at California State University, Long Beach.

Skyler is in charge of capturing the image from the camera and sending it to the FPGA, updating the servo positions so they can move the Desk-Buddy (the name of our robotic arm), balancing the servos, creating the dual-port power supply, making a custom PCB, and verifying the PCB design.

Victor Espinoza:

Victor Espinoza is a senior student in the Computer Engineering department at California State University, Long Beach. He enjoys listening to music, fishing, camping, archery, and learning about the many areas of discipline in the field of Computer Engineering.

Victor is in charge of distinguishing between the different states that the robotic arm can be in at any given time (idle, happy, sad, and mad). Victor is also responsible for making sure that the Desk-Buddy transitions correctly between the different states. Additionally, he is responsible for saving the image data taken by the camera to memory. Finally, Victor is responsible for selecting and displaying the correct LED colors on the Desk-Buddy to help it convey different emotions (happiness, sadness, and anger).

Michael Parra:

Michael Parra is an El Camino Community College alumnus who received an Associate degree in Physics, Mathematics, and Pre-Engineering. After four years at the community college, he transferred to CSULB as a Computer Engineering major and hopes to graduate in Spring 2016. Michael has aided in programming the Raspberry Pi and Arduino to control a Mars Rover for the NASA RASC-AL Robo-Ops team at CSULB.

Michael is in charge of understanding the Face Detection Algorithm, and the way it will be implemented into the Desk-Buddy’s design. He is also in charge of implementing the facial detection aspects of the design and getting the Desk-Buddy to actually recognize a face that appears on an image.  

Jose Trejo:

Jose Trejo is a student at California State University, Long Beach majoring in Computer Engineering. He loves playing video games, listening to music, and hanging out with friends. He also enjoys the many cool and exciting aspects there are to learn about in Computer Engineering.

Jose is in charge of creating an amplifier circuit for the speaker, connecting the speaker to the Desk-Buddy, and making sure that it outputs the appropriate sounds depending on its mood or personality. He is also in charge of the user interface, making sure the design is what the team wants, and ensuring that the Engineering Manual for PCB Design comes together fully.

Introduction:

To the naked eye the Desk-Buddy may appear to be just a normal lamp, but it is something far more unique. The Desk-Buddy is actually not a lamp at all; it is an interactive robotic arm equipped with a camera that can detect a person's face via facial detection. The Desk-Buddy has four main segments that are all controlled by servo motors. The servos are strategically placed on the different segments of the Desk-Buddy to allow it to move up and down and left to right. There are also servos attached to the segment that houses the camera, which allow it to move the camera freely (up, down, left, and right). This allows the Desk-Buddy to move around and look for people to interact with. The Desk-Buddy also includes color-changing lights to help it exhibit different emotions (happy, sad, mad). One example: if the Desk-Buddy is not able to find anybody to interact with, it changes its lights to a blue color to signify sadness. To further help the Desk-Buddy convey emotions, it also has an attached speaker that lets it make certain noises based on the emotion that it is trying to convey.

The camera will take pictures and process the images at a frame rate of 3 frames per second. It will save the images to memory and process them using a facial detection algorithm (we decided to use the Viola Jones algorithm for this project). Once the image is processed, the results will then determine whether we should update the servos, LEDs, and speaker noise.

The Desk-Buddy is strictly made for the entertainment of the user so that they can have a fun device to interact with. The final product with everything connected to it will weigh less than 12 pounds.

Product Diagram:

This is more or less what we plan our final product to look like:

 

Refined Product Description:

Purpose:

The purpose of creating the Desk-Buddy is to make a friendly companion that can display different personalities for the user. Put simply, the Desk-Buddy is strictly made for the entertainment of the user. It can recognize a person’s face and try to interact with him/her in order to try and understand them. The Desk-Buddy is very needy though and always wants all of the user’s attention. Its behavior will change based on how much user-interaction it is receiving. It also displays different emotions/moods using LEDs and it can also communicate with the user by outputting simple noises to express different sentiments.

Functionality / packaging:

For this project we will create a robotic arm called the Desk-Buddy that has four segments controlled by servos as well as a camera attached to it. The camera will be used to detect and follow a person's face. The arm will constantly adjust its position using the servos so that it is always staring at the user. The arm will also use LED lights to reflect its moods based on how much the user is interacting with it. For example, if the user covers his/her face, the arm will display a sad mood (blue) using the LED lights. The arm would then constantly search for the user's face until it detects it, at which point it would become happy and display the appropriate LED color scheme (green). It will also be able to show different personalities through movement and sound; the Desk-Buddy will achieve this by moving its servos around and by using an attached speaker to make certain sounds that insinuate different emotions. Our approach to this project is to use rapid prototyping to get our project up and running and then to create our own custom printed circuit board based on our design. We will then substitute our custom printed circuit board for the development board used in our rapid prototyping implementation and make sure that all of the functionality is still maintained.

 

Project Overview:

For thousands of years, people have dreamt of inventing a robot that can think, talk, walk, and sense for itself, just like a human being. The very first robots had only basic functions like walking and flashing lights and had no embedded systems built into them. Once embedded systems were invented, robots could talk and produce sounds from data stored in memory. Robots did not change dramatically for a long time, but today anybody can build one, and they do not require expensive tools to do it. Building a robot does not cost much: you can buy a microcontroller for $10, a couple of sensors, some servos, etc., and be set to build a robot. Robots in today's day and age are far better than those of the past because we have sensors and processing units that make them more human-like.

Our Desk-Buddy is not just a robot. It is being built to be our companion, but the main purpose of making the Desk-Buddy is to gain hands-on experience with facial detection, because people can benefit a lot from facial detection in the future. For example, we could have televisions in the near future with facial detection built in that turn themselves off when nobody is in the room watching them. Another example is the security camera: with integrated facial detection, it can more freely keep track of objects or people that pass by it.

The costs of implementing facial detection are relatively low because electronic parts today are cheaper than in the past. Businesses and industry can use facial detection in their workplace security, use it to elevate products to a whole other level, and create new, fun, and exciting products.

To get a better grasp of the specific technologies being used in our project, please refer to the Theory and Software Design sections; we did not want to be overly redundant in writing about the different technologies that we are designing around.

Existing Products:

1. Pinokio (http://www.ben-dror.com/pinokio)

 

 

Similarities:

  • Based off of a lamp
  • Servo functionality very similar
  • Project using facial detection

Pinokio Advantages:

  • Easier to program using C/C++ with libraries (OpenCV)
  • Simple design
  • Product is already finished
  • Easy modification
  • Metal parts are sturdier

Pinokio Disadvantages:

  • Must be connected to a computer
  • Made from metal (heavier)
  • Not as many added features as our product

Desk-Buddy Advantages:

  • Overall product will be a lot lighter by using 3-D printed parts instead of metal
  • Does not have to be connected to a computer
  • Added LED functionality to help express different emotions
  • Added sound output to further enhance emotional expression
  • Modular design
  • Customizable
  • External power for components

Desk-Buddy Disadvantages:

  • Harder to program in Verilog
  • Attempting to do facial detection without OpenCV
  • Added functionality makes the overall design more complex
  • 3-D printed parts are weaker than metal parts
  • Product has not been developed yet

Estimated Pinokio Cost: $300
Estimated Desk-Buddy Cost: $300
Concluding Statement: At its core, our project has the same base functionality as the Pinokio project: moving around using servos, having a camera that is used to detect a person's face, and having the robot constantly move around trying to detect that face. Our product, however, will be better than the Pinokio because we are integrating both sounds and LED lights to help our robotic arm express different personalities. These additions make our design more advanced than the Pinokio project and will result in a more sophisticated product.

 

 

2. Mira (https://www.youtube.com/watch?v=A7iAMELA_TY)

 

Similarities:

  • Changing LED colors
  • Project using facial detection
  • Outputs noises to convey emotions

Mira Advantages:

  • Compact design
  • Very portable
  • Well-designed (no wires hanging out)

Mira Disadvantages:

  • Not customizable
  • Hard to modify
  • Battery operated
  • Only seems to be able to express excitement

Desk-Buddy Advantages:

  • Easy to modify
  • Uses a power supply (can be plugged into a wall)
  • Able to express different emotions
  • Able to move more freely than Mira

Desk-Buddy Disadvantages:

  • Not as portable as Mira
  • Bulkier and heavier

Estimated Mira Cost: $200
Estimated Desk-Buddy Cost: $300
Concluding Statement: The Mira robot works really well at exhibiting excitement by making noises and alternating the LED colors that it is displaying. This, however, is the only emotion that the Mira is able to display. Our project will be better than the Mira because we are going to be able to mimic a lot more emotions such as happiness, sadness, and anger. These emotions will be demonstrated by combining servo movement, changing the LED colors, and by having the robot make certain sounds. Our project will also offer a freer range of motion than the Mira robot.

 

Project Objectives:

The ultimate objective of this entire project is to create the Desk-Buddy. To accommodate the senior design objective of 80% self-designed hardware, our team will use the Nexys2 FPGA (Field Programmable Gate Array) board, with some consideration of using a more advanced FPGA. Our first objective will be implementing the sound and lighting that our Desk-Buddy will use to personify emotions. Secondly, we will use PWM (Pulse Width Modulation) to control the servo motors at the lamp's joints. Third, we will connect a camera module to the FPGA and process the image. After image processing, we will implement a facial detection algorithm to allow for detecting and tracking a human face. Lastly, we will piece the entire project together to create our Desk-Buddy. Once we get this rapid prototyping implementation up and running, we will create a custom PCB that can accommodate the different components in our design. We will also create our own dual-port power supply with one port outputting 5V and the other outputting 12V; the 5V port will also feed a 3.3V voltage regulator. This will be done so that we can provide the proper voltage to all of the components in our design. Finally, we will design our own LED driver so that we can connect the LEDs to the FPGA and have them working properly.

Specifications:

 

User Specifications:

Parameter | Min | Max | Units | Comments
Overall Size | - | - | in | Overall size of the Desk-Buddy will be 15.9 in x 13 in x 17.3 in (length, width, height)
Material Used | - | - | - | We will be using 3-D printed parts to create the different segments of the Desk-Buddy
Overall Weight | 5 | 12 | lbs | The overall weight of the device will be less than 12 lbs.
Power Connection | - | - | - | The Desk-Buddy will be plugged into a wall outlet.
Power | 3.3 | 12 | Volts | -
Battery Life | - | - | - | The device will be powered as long as it is plugged into a wall outlet and the power switch is turned on.
Cost | - | 300 | Dollars | It is estimated that the total cost of the Desk-Buddy will be around $300 (including component and building-time costs).

 

Functional Specifications:

Parameter | Min | Max | Units | Comments
Servo Motors | - | 5 | Servo Motors | Our design will have 5 servos that help the Desk-Buddy move in different directions:

-(Servo_1): controls the base of the arm, which turns from left to right.

-(Servo_2): controls the base appendage of the Desk-Buddy, allowing it to move up and down.

-(Servo_3): helps the Desk-Buddy lean forwards and backwards.

-(Servo_4): helps the camera move up and down.

-(Servo_5): helps the camera move left and right.

RGB LED Strip | - | 1 | LED Strip | We are adding an RGB LED strip to the design so we can easily change colors to help the Desk-Buddy express different emotions/moods.
RGB LED Colors | - | 4 | Colors | Right now we are using 4 LED colors to help express the different moods that the Desk-Buddy is in:

  1. Gray – Idle State / Mood
  2. Green – Happy State / Mood
  3. Blue – Sad State / Mood
  4. Red – Mad State / Mood
Desk-Buddy Emotions / States | - | 4 | Emotions | So far the Desk-Buddy will be able to show four different states / emotions:

  1. Happiness
  2. Sadness
  3. Anger
  4. Indifference (Idle)
Anger State Time Limit | 0 | 10 | Seconds | The Desk-Buddy will stay in the angry state / mood for up to 10 seconds, at which point it will switch over to the sad state. If the Desk-Buddy detects a face before this time is up, it will automatically change to the happy state and disregard its time spent in this state altogether.
Sadness State Time Limit | 0 | 5 | Seconds | The Desk-Buddy will stay in the sad state / mood for up to 5 seconds, at which point it will switch over to the indeterminate idle state. If the Desk-Buddy detects a face before this time is up, it will automatically change to the happy state and disregard its time spent in this state altogether.
User Interaction | - | 2 | Interactions | For the time being there are only two user interactions. Once the project is up and running we are considering adding more:

  1. The user is looking at the Desk-Buddy
  2. The user is not looking at the Desk-Buddy
Facial Detection | - | 1 | Person | Our design is able to detect one face in an image at a time. In other words, the Desk-Buddy will only detect one person if a group of people is looking at it.
Speaker Noises | - | 5 | Noises | As of now, our speaker will make 5 different noises:

  1. It will make a boot-up noise when it is turned on to inform the user that it is active.
  2. It will make a happy noise whenever it enters the happy state from a different state.
  3. It will make a sad noise whenever it enters the sad state from a different state.
  4. It will make a mad noise whenever it enters the mad state from a different state.
  5. It will make an idle noise whenever it enters the idle state from a different state.
Programming Language | - | 2 | Languages | We will be using the following programming languages for our design:

  1. Verilog will be used for the majority of our software.
  2. We will use MATLAB to help us tackle facial detection and then port the finished MATLAB code over to Verilog.
Switch | - | 1 | Switch | The slide switch will be used to turn the Desk-Buddy on or off (assuming that it is connected to power).
Push Button | - | 1 | Push Button | The push button is used whenever the Desk-Buddy is in the indeterminate idle state. When it enters this state, it will do nothing until the push button is pressed, which lets the Desk-Buddy know that somebody wants to interact with it. It will leave the indeterminate idle state and begin looking for a face after the push button is pressed.

 

Engineering Specifications:

Parameter | Min | Max | Units | Comments
Power Supply | 3.3 | 12 | Volts | We will be making our own dual-port power supply. One port will provide the system with 5V / 4A and the other with 12V / 4A. We need two ports because the RGB LED strip requires a 12V input while the servos will be powered from the 5V input. All other components will be powered via a voltage regulator (3.3V).
Power Supply Current | 1 | 4 | Amps | -
Memory | - | 8 Meg x 16 | MB | On-board Micron memory (DRAM)
Voltage Regulator | 1.75 | 3.3 | Volts | We will use a voltage regulator to power the additional components in our design that are rated for lower voltages than what the supply provides.
Rapid Prototyping Implementation Device | - | 1 | - | We will be using a Nexys2 FPGA board for the rapid prototyping phase of our project. Once we have the rapid prototyping project running, we will create our own custom PCB and rebuild the project around it. The Nexys2 carries a Spartan-3E FPGA. In our final design we will substitute the Spartan-6 for the Spartan-3E because the Spartan-6 is easier to find and has more pins that we can use.
Nexys2 FPGA Clock Speed | - | 50 | MHz | The Nexys2 has a 50 MHz crystal oscillator connected to the global clock input pins on the FPGA so it can drive the clock synthesizer blocks available in the FPGA.
FPGA Power Connection | - | - | - | USB (powered over the USB2 connection)
Overall Weight | 5 | 12 | lbs | The overall weight of the device will be less than 12 lbs.
Servo Torque | 8 | 11 | kg-cm | Each servo will be able to provide a torque of up to 11 kg-cm.
Servo Movement | 0 | 180 | Degrees | Each servo will be able to move between 0 and 180 degrees. That being said, not every servo in our design will move the full 180 degrees.
Servo Increments | - | 5 | Degrees | Each servo will move in 5-degree increments.
Servo Movement Speed | - | 20 | Degrees/second | Each servo will be able to move up to 20 degrees per second.
Camera Image Processing | 1 | 3 | Frames Per Second (FPS) | The system will be able to take and process images at a frame rate of 3 frames per second.
Camera Image Size | - | 320 x 240 | Pixels | Each image will be saved using a 320 x 240 picture size.
Facial Detection Range | 1 | 3 | Ft. | The maximum range at which the Desk-Buddy can detect a face is 3 ft. away.
Recommended Operating Temperature | 60 | 120 | Degrees Fahrenheit | -
System Standby Current | - | 100 | milliamps | The standby current of our overall design will be about 100 mA.

 

We will also need to incorporate the following parts into our design to get everything working:

A few capacitors and resistors for various components in the design

A simple amplifier circuit for the speaker

A breadboard for component connections

Complementary power Darlington transistors for the LED strip

Overall Design Power consumption:

  • Voltage: 3.3V – 12V
  • Current: 4A max
  • Standby current: 100mA

Verification of Engineering Specifications:

 

Verification #: Verification Name: Verification Description
1 Facial Detection
  • Be able to detect a face in an image taken by the camera from up to three feet away.
  • Verification Method: We will verify this by standing in various positions away from the arm and making sure that it detects a face anywhere between ½ a foot and three feet away from it.
2 Image storing / processing
  • Image processing will occur at a frame rate of 3 frames per second. (at least 3 images will be taken/processed each second)
  • Verification Method: We will make sure that each image taken by the camera is scaled to the proper size and that our code can retrieve the image data and process at least 3 images each second.
3 Servo Movement
  • 0-180 degree movement
  • 20 degree movement per second
  • 5 degree increments
  • torque of 11kg-cm
  • Verification Method: We will attach the servos to our Desk-Buddy and make sure that each servo moves in 5-degree increments. We will also make sure that each servo can move 20 degrees per second and that each can move up to 180 degrees.
4 Mad and Sad State Duration
  • When in a mad state, the Desk-Buddy will stay in the Mad State for up to 10 seconds (unless a face is detected at which point it automatically enters the Happy State and abandons the Mad State altogether)
  • When in a sad state, the Desk-Buddy will stay in the Sad State for up to 5 seconds (unless a face is detected, at which point it automatically enters the Happy State and abandons the Sad State altogether)
  • Verification Method: We will make sure that the Desk-Buddy displays the correct color scheme and noise outputs for the appropriate time intervals by prompting it to enter the different mood states (idle, happy, mad, sad).
5 Following a Face
  • When the Desk-Buddy detects a face, it will be able to follow that person around so that it is always looking at the person (as long as the Desk-Buddy can still move its servos to achieve this and the servo moving limits have not been reached).
  • Verification Method: We will make sure that the Desk-Buddy is able to stay focused on a person’s face and follow it around in all directions.

 

User Interface:

We are not going to make an accompanying application for our project, so the user interface for our design will consist of only one switch and a push button on the base of our robotic arm. The switch will be used to turn our product on and off. The push button is used when the robotic arm is in the idle state: asserting it makes the Desk-Buddy leave the idle state and start searching for a face again. In this way, the push button acts as an external interrupt that lets the Desk-Buddy exit its idle state. For now, these are the only two objects comprising our user interface. Once we have successfully completed and implemented our design, and if time permits, we would add more switches in charge of additional modes, such as a music mode where the robotic arm dances along to music. If the music mode is enabled, changing its switch to the off position would return the robotic arm to its default state (detecting faces and expressing emotions). Once again, these extra switches/modes would only be added if there is enough time and only after we have successfully implemented our design. In case of an error or malfunction, the power switch can be used to turn the device off and back on, restoring it to its default state.
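Because a physical push button bounces when pressed, the FPGA would likely need to debounce it before treating a press as the interrupt described above. The following Verilog module is a minimal sketch of one way this could be done, assuming the board's 50 MHz clock; the module and signal names (debounce, btn_raw, btn_pulse) are placeholders of ours, not part of the final design.

module debounce (
    input  wire clk,        // 50 MHz board clock (assumed)
    input  wire btn_raw,    // asynchronous, bouncy push button input
    output reg  btn_pulse   // one-cycle pulse per clean press
);
    reg [1:0]  sync = 2'b00;     // two-flop synchronizer for the async input
    reg [19:0] cnt  = 0;         // 2^20 clocks ~= 21 ms at 50 MHz
    reg        btn_state = 1'b0; // current debounced button level

    always @(posedge clk) begin
        sync      <= {sync[0], btn_raw};
        btn_pulse <= 1'b0;
        if (sync[1] == btn_state)
            cnt <= 0;                   // input agrees with debounced state
        else begin
            cnt <= cnt + 1'b1;          // input differs; require it to hold
            if (&cnt) begin             // stable for the full ~21 ms window
                btn_state <= sync[1];
                btn_pulse <= sync[1];   // pulse only on press, not release
            end
        end
    end
endmodule

The idle-state logic would then watch btn_pulse instead of the raw pin, so a single press produces exactly one wake-up event.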

 

Theory:

The practicality of this project revolves around the face detection algorithm. As of now, our team is still deciding on the most practical way of tackling facial detection. The algorithm we are focusing on is the Viola Jones Algorithm, which uses a combination of other techniques to work: Haar feature selection, AdaBoost training, and cascading classifiers. In our research, our group has found three distinct ways we could implement the face detection algorithm. The easiest is using the Raspberry Pi and OpenCV. The second is to use an IC provided by Texas Instruments known as the "TMS320DM36x Digital Media System-on-Chip (DMSoC) Face Detection" part. However, this would be impractical to use because it is 16 mm by 16 mm, a little more than half an inch on each side, and it has about 300 to 400 pins that would need to be wired; we would only be capable of doing this if we sent it to a PCB (Printed Circuit Board) shop. The last option is implementing the Viola Jones Algorithm in Verilog on our FPGA.

Given that the FPGA is the core of our project, we will try implementing the image processing and Viola Jones in Verilog. Because the Viola Jones Algorithm combines several other algorithms to do its work, we may allow ourselves to fall back on a generic image processing algorithm that detects simple facial features or skin color.

The Viola Jones Algorithm starts by converting the colored image into a grayscale image. Next, it undergoes a rectangular integral-imaging process, where it takes square or rectangular portions of the picture and sums the pixels in each area; a higher number corresponds to a darker area and a smaller number to a whiter/lighter area. Normally the algorithm starts at the top left of the image with smaller-scaled squares. Once it reaches the bottom right of the image, the algorithm makes the Haar selection squares larger and again starts at the top left, continuing until it reaches the bottom right. AdaBoost training and cascading classifiers help speed up this lengthy process. AdaBoost training is used to help sequence the Haar selection squares. The cascading classifier lets the Haar selection know that a face is nearby, so the likelihood of detecting a face one pixel over is highly probable. Furthermore, at the end of the image processing, the cascading classifier locates the face by averaging all of the nearby detections.
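To make the integral-imaging step concrete, the sketch below shows one way the integral image could be built in Verilog as pixels stream in from the camera; the module name, bit widths, and handshake signals are our assumptions, not a finished design. Each output ii(x,y) holds the sum of all pixels above and to the left of (x,y), so the sum of any rectangle later costs only four lookups: bottom-right - top-right - bottom-left + top-left.

// Hedged sketch: streaming integral-image generator for a 320x240 8-bit
// grayscale stream, one pixel per clock. Uses the identities
//   row_sum(x,y) = row_sum(x-1,y) + i(x,y)
//   ii(x,y)      = ii(x,y-1) + row_sum(x,y)
module integral_image #(
    parameter IMG_WIDTH = 320
)(
    input  wire        clk,
    input  wire        frame_start,  // pulse at the start of each frame
    input  wire        pixel_valid,  // high when pixel_in carries a new pixel
    input  wire [7:0]  pixel_in,     // grayscale pixel i(x,y)
    output reg  [24:0] ii_out,       // ii(x,y); 320*240*255 fits in 25 bits
    output reg         ii_valid
);
    reg [24:0] line_buf [0:IMG_WIDTH-1]; // ii values of the previous row
    reg [16:0] row_sum;                  // running sum of the current row
    reg [8:0]  col;                      // current column index
    reg        first_row;                // no row above on the first row

    wire [16:0] row_sum_next = row_sum + pixel_in;
    wire [24:0] ii_above     = first_row ? 25'd0 : line_buf[col];
    wire [24:0] ii_next      = ii_above + row_sum_next;

    always @(posedge clk) begin
        ii_valid <= 1'b0;
        if (frame_start) begin
            row_sum   <= 0;
            col       <= 0;
            first_row <= 1'b1;
        end else if (pixel_valid) begin
            ii_out        <= ii_next;
            ii_valid      <= 1'b1;
            line_buf[col] <= ii_next;    // becomes ii(x, y-1) for the next row
            if (col == IMG_WIDTH-1) begin
                col       <= 0;          // wrap to the next row
                row_sum   <= 0;
                first_row <= 1'b0;
            end else begin
                col     <= col + 1'b1;
                row_sum <= row_sum_next;
            end
        end
    end
endmodule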

 

LED Darlington transistor calculations:

We must run 12V to power the RGB LEDs. Each segment of the SMD5050 RGB strip consists of 3 LEDs. Each segment string of LEDs draws approximately 20 milliAmps from the 12V supply. This means that there is a maximum of 20mA draw from the red LEDs, 20mA draw from the green LEDs, and 20mA from the blue LEDs for each segment. If we have the LED strip on full white (all LEDs are lit) that would be 60mA per segment.

Our SMD5050 RGB LED strip contains 30 LEDs per meter (10 segments per meter). To find the total maximum current draw per meter, we multiply 60 mA x 10 segments = 0.6 Amps per meter. This assumes, however, that we have all of the LEDs on at once (which displays white) and powered from the 12V supply. Since we are going to use less than a meter of the RGB LED strip, we estimate that we will need to drive about 0.4 Amps to the LEDs if they are all on.

 

Speaker Amplification: We must create an amplifier circuit for our 8-ohm speaker. The amplifier will deliver the audio signal to the speaker, increasing the volume and cleaning up the sound as well. We will be using the LM386 to accomplish this. We are currently still looking at different circuits that involve the LM386, as it seems to be the best choice thus far.

Hardware Design:

 

Qty | Price | Description | Part # | Supplier
5 | $20 | Hi-Torque Servos | MG966R | eBay
Reason we chose this item: It is high torque (11 kg-cm).
1 | $20 | Spartan-6 | XC6SLX9-2TQG144C | eBay
Reason we chose this item: This Spartan-6 has 144 pins, which we can solder more easily than the other package types.
1 | $6.50 | SDRAM 256 Mbit (16M x 16) | MT48LC16M16A2P-75IT | eBay
Reason we chose this item: This memory has a lot of space and is easy to solder.
1 | $1.24 | 8Mb SPI Flash | SST25VF080B-80-4I | eBay
Reason we chose this item: This ROM has enough space for us to load our firmware.
1 | $6.95 | FPGA PROM | XCF04S | eBay
Reason we chose this item: We need this for FPGA programming.
1 | $2.65 | 3.3V 1.5A Voltage Regulator | LT1086-3.3 | Digikey
Reason we chose this item: The Spartan-6 requires 3.3V to operate, and 1.5A is enough for the whole circuit.
1 | $4.02 | 100 MHz 3.3V Osc. HCMOS/TTL | CTX318LVCT-ND | Digikey
Reason we chose this item: It has a decent speed for the Spartan-6 and also runs at 3.3V.
1 | $2.12 | USB to UART | MCP2200-I/SO-ND | Digikey
Reason we chose this item: We need to program our board over USB instead of RS-232.
1 | $10 | LED Light Strip | SMD5050 RGB | Adafruit
Reason we chose this item: We chose an RGB LED light strip so that we could dynamically choose which color is displayed on the LEDs. This means we do not need separate lines of LEDs connected together to get the right colors to show up (the LEDs are connected internally in the RGB LED design, along with resistors).
3 | $5 | Complementary power Darlington transistors | TIP120 | Adafruit
Reason we chose this item: We chose this item because it can easily handle the amount of current we need to drive the LED strip (it is overkill, since it can handle up to 5A and we are driving less than 1A for the LEDs, but it is better to be safe than sorry).
1 | $11.96 | Camera 640x480 | OV7670 FIFO AL422 CMOS | eBay
Reason we chose this item: This camera module comes with on-board RAM, making it easier to program against its memory.
1 | $1.50 | Mini Speaker 8 ohm 0.5 W | KS-3008 | Adafruit
Reason we chose this item: It is a durable, cheap speaker that should easily fit into our design.

 

Block Diagram:

Original Hardware Design:

 

  • After we get our hardware components to work, we are going to design a custom PCB with the FPGA, voltage regulator, 3.3V converter, SD memory module, speaker, some GPIOs, and a USB UART controller. We will then substitute our custom PCB for the development board that we are using.
  • We are also going to design our own dual-port power supply (Port 1 will produce 5V / 4A and Port 2 will produce 12V / 4A).
  • We will also design the driver for the LEDs.
  • The rest of the components may be redesigned as needed for our project.

 

 

Dual Port Power Supply (Port 1: 5V 4A, Port 2: 12V 4A): The power supply provides the power for all of the components in the circuit. For our design we will use the Spartan-6 FPGA, which will draw approximately 200 mA. The 5 servos draw a total of 2.5 A (each draws 500 mA). We have 10-20 5mm RGB LEDs that will draw 200 mA (an LED draws 20 mA at 2V). The camera will draw about 20 mA of current. The total current needed for our design is less than 4 A, but we need to make sure that we provide sufficient current to each component, meaning that we want to produce more current than needed. As such, we decided to have our power supply provide 4 A.

 

Component | Current Draw
Spartan-6 FPGA | 200 mA
Speaker | 200 mA
Servos (5) | 2.5 A
LEDs (10-20) | 200 mA
Camera | 20 mA
Total | 3.12 A

 

Power Switch: The power switch turns the whole circuit on and off. We might have to add a capacitor across the power switch to suppress sparking and smooth out the current. The switch can spark because, as it closes, electricity begins to jump across the narrowing gap from one terminal to the other; the wire and switch are also made of different materials, which makes the current uneven as it begins to run through the closing switch.

 

3.3V Converter: The 3.3-volt converter converts the 5V supply down to 3.3V for use with the camera, which requires 2.5 – 3.3V. This 3.3V converter will consist of a step-down voltage regulator with a couple of capacitors on the input and output to smooth out the DC voltage coming out of the converter.

 

FPGA: We are going to use the Spartan-6 FPGA in our design. We chose the Spartan-6 because we believe it will be capable of efficiently processing the images it receives from the camera in order to determine whether a person's face is in the frame or not.

 

Speaker: The speaker will output robotic sounds in order to communicate with the user. It can output high and low pitches which result in different groups of sounds. For example, it will output a high pitch at a fast frequency to express a happy mood. In the sad mood, the speaker will output a low pitch at a slow frequency to mimic a sad noise.
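As a sketch of how these pitches could be generated digitally before amplification, the following Verilog module toggles a square wave at a frequency set by a divider value; the mood logic would load a small divider for a high pitch and a large one for a low pitch. The module and signal names (tone_gen, divider, spk_out) are our assumptions, not a finished design.

// Hedged sketch: square-wave tone generator for the 8-ohm speaker,
// assuming a 50 MHz clock. Output frequency = 50 MHz / (2 * divider).
module tone_gen (
    input  wire        clk,       // 50 MHz clock (assumed)
    input  wire        enable,    // silence the speaker when low
    input  wire [16:0] divider,   // half-period in clocks; 25_000 -> ~1 kHz
    output reg         spk_out    // to the LM386 amplifier input
);
    reg [16:0] cnt = 0;
    always @(posedge clk) begin
        if (!enable) begin
            cnt <= 0;  spk_out <= 1'b0;
        end else if (cnt == divider) begin
            cnt <= 0;  spk_out <= ~spk_out;  // toggle -> square wave
        end else
            cnt <= cnt + 1'b1;
    end
endmodule

Sweeping the divider value up or down over time would produce the rising "happy" or falling "sad" sound effects described above.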

 

Speaker Amplifier: We will be using the LM386 to amplify our speaker output.

 

Servos: There are 5 servos in our design. Each servo controls a different position on the arm. We have a servo controlling the base of the arm which will turn from left to right (Servo_1). There is another servo controlling the base appendage of the Desk-Buddy, allowing it to go up and down (Servo_2). The third servo helps the Desk-Buddy lean forwards and backwards (Servo_3). The fourth servo helps the camera move up and down (Servo_4). The fifth servo helps the camera move left and right (Servo_5).

 

Servo Driver: We will be using the L293D to drive the servos. Each L293D can drive two servos, so we need three L293Ds. The L293D can output up to 600 mA, which can drive our high-torque servos. The driver enable line will be driven from the FPGA, which tells it when to turn the servos' power on and off.

 

LEDs: We plan to use RGB LEDs for this project so that we can alter the color being displayed on the LEDs in order to simulate different moods. The way that RGB LEDs work is that all of the colors available come from a combination of different values for the Red, Green, and Blue colors. Each color can be filled with a value from 0 to 255 (meaning that each color is 8 bits). Each color is achieved by combining red, green, and blue LEDs together.

 

LED Driver: In order to drive the LEDs we need to connect them in a way that will not burn out the pins they are connected to. This is going to be done by connecting each of the R, G, and B pins on the LED strip to Darlington transistors so that we can safely sink the current flowing through the LEDs to ground. Each of the R, G, and B pins connects to the collector of a transistor, the emitters connect to ground, and each base is driven (through a resistor) from an FPGA output pin.

 

Camera: We are going to use a camera that is compatible with I2C communication for face detection. The camera will take images at a size of 320 x 240 and send the data to our FPGA while also storing it into RAM. This is done so that the FPGA can perform facial detection processing on the image. The desired frame rate for the camera is 3 frames per second.

 

RAM/ROM Memory: The image data will be stored in RAM. The memory will then communicate with the FPGA via an SPI interface.

 

 

Schematic:

*Note: some of the schematics are hard to read in this document because they have been scaled down. That is why we have included the schematic images in the zip file that we uploaded to BeachBoard. Please consult those images if you are having trouble seeing the pictures.

 

 

Hardware Task List:

Task # | Task Name | Task Leader | Additional Team Members Involved | Estimated Time Needed
1 | Design Power Supply | Skyler Tran | Jose Trejo | 12 Hours
Task Description: This task consists of measuring the power and current required for the whole project. After determining this, we will build a power supply to these requirements that can provide a higher-than-required current so that it will not burn out.
2 | Connecting Servos to the Desk-Buddy | Michael Parra | Skyler Tran | 10 Hours
Task Description: This task consists of measuring the wiring distance needed for each servo and determining what type of wire should be used in this project. It is also responsible for making sure that the software functionality still works after the servos are connected to the Desk-Buddy.
3 | Balancing Servos on the Desk-Buddy | Skyler Tran | Victor Espinoza | 15 Hours
Task Description: This task consists of measuring the balance required for the arm so that we can mount the servos and add springs to help with balancing as well.
4 | Connecting LEDs on the Desk-Buddy | Victor Espinoza | - | 3 Hours
Task Description: This task consists of connecting the LEDs to the Desk-Buddy and making sure that they do not negatively impact the integrity of the overall design. It is also responsible for making sure that the software functionality still works after the LEDs are connected to the Desk-Buddy.
5 | Implementing Amplifier Circuit for Speaker | Jose Trejo | - | 3 Hours
Task Description: This task consists of connecting the amplifier circuit to the speaker that is going to be connected to the Desk-Buddy. It is also responsible for making sure that sound can actually be heard coming out of the speaker after connecting it to the amplifier.
6 | Connecting Speaker / Amplifier Circuit to the Desk-Buddy | Jose Trejo | Michael Parra | 3 Hours
Task Description: This task requires the speaker to be correctly connected to the Desk-Buddy and makes sure that all software functionality still works as expected after everything is connected together.
7 | Connecting Camera to the Desk-Buddy | Michael Parra | Victor Espinoza | 3 Hours
Task Description: In this task we will make sure the wiring measurements are correct and long enough to connect the FPGA to the camera module itself. The camera has 22 pins: 18 are for data communication, and the other 4 are for ground and supply voltage.
8 | Rapid Prototyping Implementation | Jose Trejo | Skyler Tran, Victor Espinoza, Michael Parra | 20 Hours
Task Description: In this task we will combine all components and connect them to our board to make sure that everything runs how we want it to. We will keep testing until our design works properly and our team is satisfied with the result. Once we finish testing the rapid prototyping design, we will move on to making our own PCB and substituting it into our design to get our finished product.
9 | Design PCB Schematic | Skyler Tran | Jose Trejo, Victor Espinoza, Michael Parra | 20 Hours
Task Description: This task consists of designing a custom PCB schematic for our controller and some peripheral inputs and outputs like USB, the power line, a step-down converter for 3.3V, pin headers, etc. Designing a custom schematic takes a lot of time to learn and requires calculating each interconnecting path on the PCB. We also need to determine how thick each trace should be and how much resistance it will have.
10 | Print PCB | Michael Parra | Victor Espinoza, Jose Trejo, Skyler Tran | 25 Hours
Task Description: The purpose of this task is to find a company that can aid in creating a custom circuit board for our system. Much paperwork and funding will be needed to follow through with this design. Once we find a company that can print our PCB for an affordable price, we will have our PCB design printed.
11 | Soldering Components | Michael Parra | Jose Trejo | 12 Hours
Task Description: The remaining components will need to be soldered to our printed PCB to establish reliable connectivity throughout the device's circuitry. Soldering the components will give the project durability and portability. This task consists of soldering all of the components to our printed PCB design.
12 | Verify PCB Design | Skyler Tran | Victor Espinoza | 12 Hours
Task Description: To verify our PCB design after soldering, we need to test all of the peripheral inputs and outputs and all of the pins from the controller. There are several steps to verify our PCB and ensure that it works exactly like our rapid prototyping implementation. First, we have to make sure that the DC voltage is clean and stable. Second, we have to measure the minimum and maximum voltages and make sure that the on-board voltage regulator is stable and does not damage the controller.
13 | Implement Final Design | Victor Espinoza | Skyler Tran, Michael Parra, Jose Trejo | 12 Hours
Task Description: This task consists of making sure that we can implement our PCB design and achieve the same functionality that we had with our rapid prototyping implementation.
14 | Create User Manual for PCB Design | Victor Espinoza | Skyler Tran, Michael Parra, Jose Trejo | 25 Hours
Task Description: This task consists of making a user's manual for our PCB design that describes the different components within our PCB and how they work. This document will be used by people who wish to gain a general understanding of the capabilities of our PCB and how all of the different components are connected together within our design.
15 | Create Engineering Manual for PCB Design | Jose Trejo | Skyler Tran, Michael Parra, Victor Espinoza | 20 Hours
Task Description: This task consists of making an Engineering Manual for our PCB design. In this manual, we will list all of the components and thoroughly describe them. It will have everything you need to know about each particular part in great detail, showing the voltages, resistances, operating temperature ranges, etc.
16 | 3-D Print Segments of Desk-Buddy | Skyler Tran | Victor Espinoza, Michael Parra | 25 Hours
Task Description: This task consists of 3-D printing the different segments that make up the body of the Desk-Buddy.
17 | Connect 3-D Printed Segments Together | Jose Trejo | Skyler Tran, Michael Parra, Victor Espinoza | 20 Hours
Task Description: This task consists of connecting the 3-D printed segments together, along with the servos, and making sure the assembled arm is sturdy enough to move as intended.

 

Software Design:

For this project, it is not required to create a windowed application for the product to operate. We are only designing firmware for face detection, moving the servos, lighting the LEDs, and outputting the sounds. For the firmware, we are going to program in Verilog using the Xilinx ISE.

 

Facial Detection Software Discussion: Our plan is to get face detection working using the Viola Jones Algorithm. If we use the Raspberry Pi B+, then we will be coding in Python. If we code in Verilog, which is the route we are currently pursuing, then we will use the Xilinx ISE. Refer back to the Theory section for more information on how the Viola Jones Algorithm works.

 

Moving Servos Software Discussion: We are going to use Verilog to generate the pulse-width modulation that controls each servo. To do this, we use the clock to set how wide a pulse we want to produce within each 20 ms frame. We can also set how quickly a servo moves: to move a servo faster, we change the pulse width by a larger amount every 20 ms.
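As a rough illustration, here is a minimal Verilog sketch of one servo PWM channel, assuming the board's 50 MHz clock and the common hobby-servo convention of a 1 ms to 2 ms pulse every 20 ms; the module and signal names (servo_pwm, angle, pwm_out) are placeholders of ours, not the final design.

// Hedged sketch of one servo PWM channel, assuming a 50 MHz clock.
module servo_pwm (
    input  wire       clk,      // 50 MHz board clock (assumed)
    input  wire [7:0] angle,    // target angle, 0..180 degrees
    output reg        pwm_out   // signal wire to the servo
);
    localparam FRAME = 1_000_000;          // 20 ms frame = 1,000,000 clocks
    reg [19:0] cnt = 0;
    // Pulse width: 1 ms (50,000 clocks) at 0 degrees up to ~2 ms at 180:
    //   50,000 + angle * (50,000 / 180) ~= 50,000 + angle * 278
    wire [19:0] high_time = 20'd50_000 + angle * 9'd278;

    always @(posedge clk) begin
        cnt     <= (cnt == FRAME - 1) ? 20'd0 : cnt + 1'b1;
        pwm_out <= (cnt < high_time);   // high for the first part of each frame
    end
endmodule

With this interface, the 5-degree increments and 20-degrees-per-second limit from our specifications amount to stepping the angle input by 5 no more often than once every 250 ms.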

 

Changing RGB LEDs Software Discussion:

So far, we are using the following colors in our design: gray = idle, green = happy, red = angry, blue = sad. In order to get the different color schemes, we need to adjust the value of each combination of red, green, and blue LEDs accordingly. The gray color has an RGB combination of R=128, G=128, B=128. The green color has an RGB combination of R=0, G=128, B=0. The red color has an RGB combination of R=255, G=0, B=0. The blue color has an RGB combination of R=0, G=0, B=255. In order to select the appropriate color we need to send all three values (red, green, and blue) to the LEDs so that we can display the appropriate color by combining them. We will use pulse-width modulation dimming techniques to control the LED strip. Pulse-width modulation revolves around controlling how wide a pulse is (how long it stays high compared to how long it stays low). Using Verilog we can easily modulate the width of a pulse using clocks.
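Below is a hedged Verilog sketch of that dimming technique for a single color channel; one instance per R, G, and B channel mixes the color. The module and signal names are our assumptions.

// Hedged sketch: 8-bit PWM dimming for one color channel of the strip.
// Instantiate three copies (duty = R, G, B values) to mix a color;
// e.g. duty = 128 on all three channels gives the gray idle color.
module led_pwm (
    input  wire       clk,
    input  wire [7:0] duty,    // 0 = off, 255 = brightest (255/256 duty)
    output reg        pwm_out  // drives the TIP120 base through a resistor
);
    reg [7:0] cnt = 0;
    always @(posedge clk) begin
        cnt     <= cnt + 1'b1;       // free-running 0..255 counter
        pwm_out <= (cnt < duty);     // high for 'duty' out of every 256 clocks
    end
endmodule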

Original Software Design:

Our project consists of the following original software components:

  1. Facial Detection – we will attempt to write our own code in MATLAB to help us implement the Viola Jones algorithm. Once we determine the correctness of our code, we will port it over to Verilog.
  2. Moving Servos – we will write our own code for this using Verilog.
  3. Changing LED colors – we will write our own code for this using Verilog.
  4. Outputting appropriate sounds – we will write our own code for this using Verilog.

Top-Level Software Block Diagram:

Block Diagram Description: This top level block diagram represents the general organization of what our software is going to look like. The foundation of our project revolves around a camera and using facial detection algorithms to detect whether a person is present in the image or not. Based on this conclusion and whether a face was previously detected or not, we then select the appropriate mood (Happy, Angry, Sad, Idle) and exhibit that mood by moving around particular servos, changing the LED color being displayed and outputting specific noises. This process (the camera takes a picture, stores it into memory, that image is processed to see if there is a face within it, then the appropriate mood is selected and outputted via the servos, LEDs, and the speaker) is repeated infinitely until the robotic arm is turned off. During this period, the Desk-Buddy will constantly be moving its servos around trying to detect a person’s face.
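To make the mood sequencing concrete, here is a hedged Verilog sketch of the state machine implied by this diagram, assuming the 50 MHz clock and the state encoding used in the flowchart (0 = idle, 1 = happy, 2 = mad, 3 = sad). The interface signals (frame_done, face_detected, push_button) are our assumptions; the LED, speaker, and servo outputs, along with the idle-state search timeout, are left to the surrounding logic.

// Hedged sketch of the Desk-Buddy mood state machine.
module mood_fsm (
    input  wire       clk,           // 50 MHz clock (assumed)
    input  wire       rst,
    input  wire       frame_done,    // one pulse per processed frame (~3 Hz)
    input  wire       face_detected, // detection result for that frame
    input  wire       push_button,   // debounced; wakes the arm from idle
    output reg  [1:0] state,         // selects LED color / speaker noise
    output reg        searching      // high while the servo sweep should run
);
    localparam IDLE = 2'd0, HAPPY = 2'd1, MAD = 2'd2, SAD = 2'd3;
    localparam [28:0] TEN_SEC  = 29'd500_000_000;  // 10 s of 50 MHz clocks
    localparam [28:0] FIVE_SEC = 29'd250_000_000;  //  5 s of 50 MHz clocks
    reg [28:0] timer;

    always @(posedge clk) begin
        if (rst) begin
            state <= IDLE;  timer <= 0;  searching <= 1'b1;
        end else if (frame_done && face_detected) begin
            state <= HAPPY; timer <= 0;  searching <= 1'b0;  // face wins from any state
        end else begin
            timer <= timer + 1'b1;
            case (state)
                HAPPY: if (frame_done) begin state <= MAD;  timer <= 0; end // face lost
                MAD:   if (timer >= TEN_SEC)  begin state <= SAD;  timer <= 0; end
                SAD:   if (timer >= FIVE_SEC) begin state <= IDLE; searching <= 1'b0; end
                IDLE:  if (push_button) begin searching <= 1'b1; timer <= 0; end // resume search
            endcase
        end
    end
endmodule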

Software Flowchart:

 

*Please note that many of these flowcharts are large and do not fit well into this document. As a precaution, we have included each flowchart image in the zip file that we uploaded with our project report. Please view those images so that you can actually read the different blocks and comments, which may be illegible here due to the scaled-down flowchart sizes.

Top-Level Software Flowchart:

Overall Flowchart Description:

This is a top-level flowchart representation of our software. As long as power is provided to the Desk-Buddy and it is turned on, it will constantly look for a person's face by moving its servos around. Once it finds a person's face, the Desk-Buddy changes its mood (state) and displays this by changing its LED color and outputting a specific sound. If a face was previously detected but the Desk-Buddy no longer detects it, it will enter the mad state for up to 10 seconds. If the Desk-Buddy recognizes a face before the 10-second mad window is up, it will immediately change to the happy state and disregard anything it was previously doing in the mad state. If the Desk-Buddy stays in the mad state for the full 10 seconds, it will then switch to its sad state for 5 seconds. Again, if it detects a face before this window elapses, the Desk-Buddy will automatically disregard its current state and enter the happy state. Once the Desk-Buddy has been sad for 5 seconds, it transitions to the indefinite idle state, where it stays until the push button is pressed. For the sake of not cluttering the top-level flowchart, we made sub-flowcharts for the Move Servos, Look For Face, Follow Face, and Output Noise/LED Color blocks referenced in this flowchart. Each of those sub-flowcharts has its own description of the data flow through it. Each block in this flowchart also has a block number associated with it; in the Block Descriptions section under this flowchart we discuss the purpose of each block and what it does.

Flowchart:

Block Descriptions:

Begin: This block simply notifies the user that the data flow has begun in the flowchart (its purpose is for the reader and it does not have any relevance in our software design). That is why we did not assign it a number.

  1. Initialize All Servos (all Servos moved to 5 degree starting point): This block initializes all of the servos on startup and moves each of the five servos in our design to a starting point of 5 degrees. This block only occurs during the startup of the system and its purpose is to initialize all of the servos to a known state.
  2. Initialize Camera (have it to start storing / processing data to detect a face): This block serves the purpose of signaling the camera to take a picture and and process the image to see if a face was detected on the image or not. This block calls the Look For Face Sub-Flowchart.
  3. Initialize LEDs (have them change to gray color, which represents the Idle state): This block sets the LEDs to gray, which signifies that the Desk-Buddy is in the idle state. It serves the purpose of setting the LEDs on the Desk-Buddy to represent that it is in the idle state.
  4. State = 0; (idle state): This block serves the purpose of setting the Desk-Buddy to a known state upon startup (the idle state). The state of the Desk-Buddy is used to determine what LED color should be displayed and what output noise should be made according to what state it is in.
  5. Reset / Start Timer: The purpose of this block is to start and reset the timer associated with the idle state. Initially, the Desk-Buddy will look try to detect a face for 15 seconds, and if it does not detect a face within this time window, then it will go to the sad state.
  6. Initialize speaker output (make a boot-up noise to notify user that robotic arm is powered on): This block serves the purpose of outputting a boot-up noise to the speaker on the Desk-Buddy so that the user can know that it has been turned on.
  7. Face Detected?: This block is a decision block that checks to see if a face was detected or not based on the output of the Look For Face Flowchart.
  8. if (State == 1): This block follows the true path of block #6. It is another decision that checks to see if the Desk-Buddy is currently in the happy state (state 1) when it entered this path.
  9. State = 1; (happy state): This block follows the false path of block #7. If it was not in the happy state, then we change it to the happy state by changing the value of State to a 1.
  10. Output LEDS and Noise: We then need to change the colors on the LEDs to green to signify that the Desk-Buddy is in a happy state. We also need to output a happy noise to the Desk-Buddy’s speaker. These are the functions that this block is in charge of. It calls the Output LEDs and Noise Sub-Flowchart to achieve these tasks.
  11. Follow Face: This block gets executed by both the true and false paths of block #7. The purpose of this block is to call the Follow Face Sub-Flowchart. This Sub-Flowchart will take a picture and try to detect a person’s face on it. If it does detect that person’s face, then it will update the servos accordingly in order to try and get that person’s face to be in the middle of the image.
  12.  if (State  == 1): This block follows the false path of block #6. It is another decision that checks to see if the Desk-Buddy is currently in the happy state (state 1) when it entered this path.
  13. State = 2; (mad state): This block follows the true path of block #11. If a face was not detected and the Desk-Buddy is currently in the happy state, then we change its State value to a 2 (mad state). This signifies that the Desk-Buddy is mad that it could not detect a face all of the sudden (as stated before, the Desk-Buddy is very needy!).
  14. Output LEDS and Noise: We then need to change the colors on the LEDs to red to signify that the Desk-Buddy is in a mad state. We also need to output a mad noise to the Desk-Buddy’s speaker. These are the functions that this block is in charge of. It calls the Output LEDs and Noise Sub-Flowchart to achieve these tasks.
  15. Reset / Start Timer: This block resets the timer and starts it off fresh from 0. This timer is needed because we decided that the Desk-Buddy will only stay in its mad state for a finite amount of time (10 seconds to be exact).
  16. Get Elapsed Time Value: This block is responsible for retrieving the current amount of time that has elapsed. We need this value to make sure that the Desk-Buddy moves onto the next state after 10 seconds have passed. If the Desk-Buddy detects a face before these 10 seconds pass, then it will automatically enter the happy state and disregard the previous state and elapsed time in that state (It is very naive and trusting).
  17. if (ElapsedTime >= 10s): This block is a decision block that checks to see if the Desk-Buddy has reached the time limit for its mad state (10 seconds).
  18. State = 3; (sad state): This block follows the true path of block #16. If the Desk-Buddy has reached its time limit, we then move onto the next state by changing the State value to a 3 (sad state).
  19. Output LEDS and Noise: We then need to change the colors on the LEDs to blue to signify that the Desk-Buddy is in a sad state. We also need to output a sad noise to the Desk-Buddy’s speaker. These are the functions that this block is in charge of. It calls the Output LEDs and Noise Sub-Flowchart to achieve these tasks.
  20. Reset / Start Timer: This block resets the timer and starts it off fresh from 0. This timer is needed because we decided that the Desk-Buddy will only stay in its sad state for a finite amount of time (5 seconds to be exact).
  21. Get Elapsed Time Value: This block is responsible for retrieving the current amount of time that has elapsed. We need this value to make sure that the Desk-Buddy moves onto the next state after 5 seconds have passed. If the Desk-Buddy detects a face before these 5 seconds pass, then it will automatically enter the happy state and disregard the previous state and elapsed time in that state (It is very naive and trusting).
  22. if (ElapsedTime >= 5s): This block is a decision block that checks to see if the Desk-Buddy has reached the time limit for its sad state (5 seconds).
  23. Stop/Disable Timer: This block is responsible for stopping and disabling the timer so that it does not run while the Desk-Buddy is stuck doing nothing in the idle state.
  24. Output LEDS and Noise: We then need to change the colors on the LEDs to gray to signify that the Desk-Buddy has entered its indefinite idle state. We also need to output an idle noise to the Desk-Buddy’s speaker. These are the functions that this block is in charge of. It calls the Output LEDs and Noise Sub-Flowchart to achieve these tasks.
  25. State = 0; (indefinite idle state): We then move on to the next state by changing the State value to a 0 (indefinite idle state). That is what this block is responsible for doing. When the Desk-Buddy enters the indefinite idle state, it does nothing, because it would be extremely inefficient to have the Desk-Buddy always moving. The Desk-Buddy enters this indefinite idle state after looking for a person’s face for 15 seconds and not being able to detect any faces.
  26. Input push button status: This block is responsible for getting the push-button status for the indefinite idle state. We figured that using a push button to break the Desk-Buddy out of the indefinite idle state was the easiest way to get the Desk-Buddy to change its state.
  27. if (push button asserted): This is a decision block that checks to see if the push button associated with getting the Desk-Buddy out of its indefinite idle state was pushed or not. If it was, then we exit the indefinite idle state, otherwise we loop back to block #25.
  28. Reset / Start Timer: Now that the Desk-Buddy is out of the indefinite idle state, we need to re-enable and restart the timer (to look for a face for another 15 seconds before the Desk-Buddy re-enters the indefinite idle state).
  29. else if (State == 2): This block follows the false path of block #12. This is a decision block that checks to see if the Desk-Buddy is currently in the mad state (State == 2).
  30. else if (State == 3): This block follows the false path of block #29. This is a decision block that checks to see if the Desk-Buddy is currently in the sad state (State == 3).
  31. Get Elapsed Time Value: This block follows the false path of block #30. If the Desk-Buddy is not in State 1, State 2, or State 3, that means it is in the idle state. This block is responsible for seeing how much time has elapsed since the Desk-Buddy entered the idle state.
  32. if (ElapsedTime >= 10s): This is a decision block that checks to see if the Desk-Buddy has reached its limit to search for a face while in the idle state. If it has, then the Desk-Buddy moves into a sad state (State = 3).
  33. Move Servos: This block is responsible for updating the servo positions. This block calls the Move Servos Sub-Flowchart to update the servo positions. How the Move Servos Sub-Flowchart works is that it will follow a pattern and move only certain servos at a time so that it can cover the most surface area while taking pictures and processing them to see if a face was detected or not.
  34. Look for face: This block is responsible for telling the camera to take a picture and processing that picture to see if a face was detected inside of it or not. This is achieved by calling the Look For Face Sub-Flowchart.
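
To summarize the state logic that these blocks walk through, the following is a minimal C sketch of the emotion state machine (a sketch only: the function names are our placeholders for the Sub-Flowcharts and board interfaces described above, and the real flowchart interleaves these steps in more detail than shown here):

    /* Emotion states used by the Top-Level flowchart. */
    enum { IDLE = 0, HAPPY = 1, MAD = 2, SAD = 3 };

    /* Placeholders for the Sub-Flowcharts and board I/O (assumed names). */
    extern int  look_for_face(void);            /* 1 if a face was detected */
    extern void follow_face(void);
    extern void move_servos(void);
    extern void output_leds_and_noise(int state);
    extern void timer_restart(void);
    extern void timer_stop(void);
    extern int  timer_elapsed_s(void);
    extern int  push_button_asserted(void);

    static int state = HAPPY;

    void desk_buddy_step(void)                  /* one pass through the loop */
    {
        if (state == IDLE) {                    /* idle: wait for the button */
            if (push_button_asserted()) {
                state = HAPPY;
                timer_restart();
            }
            return;
        }
        if (look_for_face()) {                  /* a face always makes it happy */
            if (state != HAPPY) {
                state = HAPPY;
                output_leds_and_noise(HAPPY);   /* green LEDs, happy sound */
            }
            follow_face();                      /* keep the face centered */
            return;
        }
        move_servos();                          /* no face: keep sweeping */
        if (state == HAPPY) {                   /* just lost the face: get mad */
            state = MAD;
            output_leds_and_noise(MAD);         /* red LEDs, mad sound */
            timer_restart();
        } else if (state == MAD && timer_elapsed_s() >= 10) {
            state = SAD;                        /* mad for at most 10 seconds */
            output_leds_and_noise(SAD);         /* blue LEDs, sad sound */
            timer_restart();
        } else if (state == SAD && timer_elapsed_s() >= 5) {
            state = IDLE;                       /* sad for at most 5 seconds */
            output_leds_and_noise(IDLE);        /* gray LEDs, idle sound */
            timer_stop();
        }
    }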

Look For Face Flowchart:

Overall Flowchart Description:

The face detection flowchart uses the Viola-Jones algorithm. The Viola-Jones algorithm has 4 steps: Haar Feature Selection, Integral Imaging, Adaboost Training, and Cascading Classifiers. Haar Feature Selection uses different sized black and white rectangles to help locate the dark and light features of a face; the rectangles are used to distinguish key features of a face such as the mouth, eyes, and nose. Integral Imaging converts the picture into a running-sum form that lets the total brightness of any rectangular region be computed with just four lookups, which is what makes evaluating many Haar rectangles affordable. Adaboost is short for Adaptive Boosting; it is the training step that picks out the small set of Haar features that best separate faces from non-faces and combines them into stronger classifiers. It also speeds up detection because the system can avoid heavy pattern recognition over most of the image: if there are no major signs of a face in a particular location, the program skips the longer computations in favor of areas with more signs of a face. The Cascading Classifier makes the final detection. Windows that fail an early stage are discarded immediately, and if there is an abundance of face confirmations in a certain area, the cascade spends its last stages looking in that area so that the result is one detection instead of multiple detections of the same face.
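
To make the integral imaging step concrete, here is a short C sketch (our own illustration, not the eventual FPGA implementation) of how the integral image is built and how it lets the sum of any rectangle of pixels be computed with only four lookups. A two-rectangle Haar-like feature is then simply the difference of two such sums (dark region minus light region).

    #include <stdint.h>

    /* Build an integral image: ii[y*W + x] holds the sum of all pixels
     * above and to the left of (x, y), inclusive, for a W-by-H image. */
    void build_integral(const uint8_t *pix, uint32_t *ii, int W, int H)
    {
        for (int y = 0; y < H; y++) {
            uint32_t row = 0;                       /* running sum of row y */
            for (int x = 0; x < W; x++) {
                row += pix[y * W + x];
                ii[y * W + x] = row + (y ? ii[(y - 1) * W + x] : 0);
            }
        }
    }

    /* Sum of the rectangle (x0,y0)..(x1,y1), inclusive: four lookups. */
    uint32_t rect_sum(const uint32_t *ii, int W,
                      int x0, int y0, int x1, int y1)
    {
        uint32_t A = (x0 && y0) ? ii[(y0 - 1) * W + (x0 - 1)] : 0;
        uint32_t B = y0 ? ii[(y0 - 1) * W + x1] : 0;
        uint32_t C = x0 ? ii[y1 * W + (x0 - 1)] : 0;
        return ii[y1 * W + x1] - B - C + A;
    }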

Each block in this flowchart also has a block number associated with it. In the Block Descriptions section under this flowchart we discuss the purpose of each block and what they do.

Flowchart:

 

 

Block Descriptions:

Begin: This block simply notifies the user that the data flow has begun in the flowchart (its purpose is for the reader and it does not have any relevance in our software design). That is why we did not assign it a number.

  1. Signal Camera to Take picture: This block is responsible for getting the camera to take a picture. Once the camera takes a picture, the picture data is then saved to memory in the next step.
  2. Transmit Data image from camera to memory: Now that we have image data, we then save it to memory so that we can process the image to see if there is a face detected on the image or not. This block is responsible for making sure that the image data is correctly saved to the Micron memory within the Nexys2 FPGA board.
  3. Perform Haar Feature Selection: The Haar Feature Selection uses different sized black and white rectangles to help locate dark and light features of a face.  The rectangles are used to distinguish key features of a face such as mouth, eyes and nose. This block is responsible for implementing the logic behind Haar Feature Selection on the image.
  4. Create Image Integral and save to memory: This block is responsible for creating an integral image from the picture so that the Haar features selected in block #3 can be evaluated quickly. The integral image lets the brightness of any rectangular region of the image be computed in constant time. Once the integral image is created, it is then saved to the Micron memory.
  5. Perform Adaboost: This block is responsible for applying the AdaBoost-trained classifiers to the image. Adaboost is short for Adaptive Boosting; it combines the most telling Haar features into stronger classifiers, which helps speed up detection: if there are no major signs of a face in a particular location of the image, the program skips the longer computations in favor of areas where there are more signs of a face existing.
  6. Use Cascading Classifiers: This block is responsible for using Cascading Classifiers to detect a face (a short sketch of this staged rejection appears after this list). Windows that fail an early stage are discarded immediately, and if there is an abundance of face confirmations in a certain area, the cascade spends its final stages looking in that area so that the result is one detection instead of multiple detections of the same face.
  7. Locate Face coordinates on image: If a face is detected, then this block is responsible for returning the coordinates on the image where the face was detected. For the sake of simplicity, we decided that the coordinate that will be returned will be the center point in-between both of the eyes on a face. In this way, we will be able to get the coordinates of roughly the center of the face.
  8. Return Results (was face actually detected or not): This block is in charge of returning the actual results of the Viola-Jones algorithm. It returns the final verdict on whether a face was located on the image or not. This block also returns the coordinates of the face (block #7) if the face was detected.

End: This block simply notifies the user that the data flow has ended in the flowchart (its purpose is for the reader and it does not have any relevance in our software design). That is why we did not assign it a number.
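
As referenced in block #6 above, here is a hedged C sketch of the staged rejection that a cascade performs. The stage_t layout and the weak_score callback are illustrative assumptions of ours, not the trained classifier data the real cascade would be built from.

    #include <stdint.h>

    /* One boosted stage: the scores of its weak Haar-feature classifiers
     * are summed and compared against the stage threshold. weak_score()
     * evaluates weak classifier i on the window at (x, y) using the
     * integral image ii of a W-pixel-wide image. */
    typedef struct {
        int     n_weak;
        float (*weak_score)(int i, const uint32_t *ii, int W, int x, int y);
        float   threshold;
    } stage_t;

    int window_is_face(const stage_t *stages, int n_stages,
                       const uint32_t *ii, int W, int x, int y)
    {
        for (int s = 0; s < n_stages; s++) {
            float score = 0.0f;
            for (int i = 0; i < stages[s].n_weak; i++)
                score += stages[s].weak_score(i, ii, W, x, y);
            if (score < stages[s].threshold)
                return 0;           /* rejected by a cheap early stage */
        }
        return 1;                   /* survived every stage: report a face */
    }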

 

 

Move Servos Flowchart:

Overall Flowchart Description:

This flowchart controls the process of moving the servos to the appropriate location. The servo movement follows a specific pattern: move the base servo all the way left/right while adjusting the camera left/right servo accordingly; once the base servo has reached its maximum angle, increment the remaining servos and reverse the direction that the base servo moves (right/left). Repeat this process until the other servos reach their maximum angles and then reverse their direction (start decrementing them). This way, the Desk-Buddy is bound to detect a person’s face because it covers the whole range of servo movements that it can withstand. If the Desk-Buddy does detect a face, it will then start using the Follow Face servo flowchart instead of this flowchart (which removes the inefficiency of checking every available servo range combination in this flowchart).

We start this flowchart off by determining whether the base servo (servo_1) has moved all the way left or all the way right. If it has, then we increment or decrement (depending on the current position of each servo) servo_2, servo_3, and servo_4. These three servos are only incremented/decremented after servo_1 has made a complete rotation from left to right or vice versa. We always increment/decrement servo_1 and servo_5 (the servo controlling the left and right movement of the camera). Each pass through this flowchart will result in either only servo_1 and servo_5 updating their positions or in all of the servos updating their positions. Each update to the different servo positions is achieved using 5° increments/decrements.

For the purpose of this flowchart, we set the maximum travel to be 180°. In reality, however, not all of the servos will be moving up to 180° (each servo will only move up to a certain angle depending on which segment of the arm it is located on). For example, the servo at the base of the arm would move from 0 – 120 degrees, the servo at the arm segment would move from 0 – 100 degrees, the servo at the arm shade would move from 0 – 60 degrees, and the servo inside the shade helping the camera to move left or right would move from 0 – 60 degrees. These angle ranges are not final yet and will be adjusted accordingly once we are able to determine our final servo movement ranges. Some servos will not need to move as far as others, so we need to make sure that each servo is able to move to its maximum angle range. These final ranges can only be confirmed once we hook the servos up to the Desk-Buddy.
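
The sweep bookkeeping described above can be sketched in C as follows (a sketch under our assumptions: each position would really be read from and written back to memory on the board, and the limits are the provisional ranges just listed). Note that this sketch flips the direction flag only when the next step would leave the range, which keeps the position bounded:

    /* One servo's sweep state, mirroring the sXInc0Dec1 flags used in the
     * flowchart (0 = increment by 5 degrees, 1 = decrement by 5 degrees). */
    typedef struct {
        int pos;        /* current angle in degrees (saved to memory) */
        int inc0dec1;   /* direction flag                             */
        int max_deg;    /* provisional limit, e.g. 120 for the base   */
    } servo_t;

    void sweep_step(servo_t *s)
    {
        if ((s->inc0dec1 == 0 && s->pos >= s->max_deg) ||
            (s->inc0dec1 == 1 && s->pos <= 0))
            s->inc0dec1 = !s->inc0dec1;          /* hit a limit: reverse */
        s->pos += (s->inc0dec1 == 0) ? 5 : -5;   /* take one 5-degree step */
        /* the real design would write s->pos back to memory here */
    }

In this sketch, servo_1 and servo_5 would receive a sweep_step() on every pass, while servo_2 through servo_4 would receive one only after servo_1 completes a full left-to-right (or right-to-left) sweep.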

Each block in this flowchart also has a block number associated with it. In the Block Descriptions section under this flowchart we discuss the purpose of each block and what they do.

Flowchart:

 

Block Descriptions:

*Note: There are 5 servos in our design. Each servo controls a different position on the arm. We have a servo controlling the base of the arm which will turn from left to right (Servo_1). There is another servo controlling the base appendage of the Desk Buddy, allowing it to go up and down (Servo_2). The third servo helps the Desk Buddy lean forwards and backwards (Servo_3). The fourth servo helps the camera to move up and down (Servo_4). The fifth servo helps the camera to move left and right (Servo_5).

Begin: This block simply notifies the user that the data flow has begun in the flowchart (its purpose is for the reader and it does not have any relevance in our software design). That is why we did not assign it a number.

  1. s1Inc0Dec1= 0: This block is responsible for initializing the increment/decrement value for servo 1 (this value determines whether we are incrementing the servo (moving it to the left or up) or decrementing the servo (moving it to the right or down)). This block only gets executed once upon startup and is then ignored every time after that when this Sub-Flowchart is called upon. A zero value results in incrementing the servo position and a one value results in decrementing the servo position.
  2. s2Inc0Dec1= 0: This block is responsible for initializing the increment/decrement value for servo 2 (this value determines whether we are incrementing the servo (moving it to the left or up) or decrementing the servo (moving it to the right or down)). This block only gets executed once upon startup and is then ignored every time after that when this Sub-Flowchart is called upon. A zero value results in incrementing the servo position and a one value results in decrementing the servo position.
  3. s3Inc0Dec1= 0: This block is responsible for initializing the increment/decrement value for servo 3 (this value determines whether we are incrementing the servo (moving it to the left or up) or decrementing the servo (moving it to the right or down)). This block only gets executed once upon startup and is then ignored every time after that when this Sub-Flowchart is called upon. A zero value results in incrementing the servo position and a one value results in decrementing the servo position.
  4. s4Inc0Dec1= 0: This block is responsible for initializing the increment/decrement value for servo 4 (this value determines whether we are incrementing the servo (moving it to the left or up) or decrementing the servo (moving it to the right or down)). This block only gets executed once upon startup and is then ignored every time after that when this Sub-Flowchart is called upon. A zero value results in incrementing the servo position and a one value results in decrementing the servo position.
  5. s5Inc0Dec1= 0: This block is responsible for initializing the increment/decrement value for servo 5 (this value determines whether we are incrementing the servo (moving it to the left or up) or decrementing the servo (moving it to the right or down)). This block only gets executed once upon startup and is then ignored every time after that when this Sub-Flowchart is called upon. A zero value results in incrementing the servo position and a one value results in decrementing the servo position.
  6. retrieve Servo_1 position from memory: In order to keep track of the position of servo 1, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_1.
  7. Get Servo_1 s1Inc0Dec1 value: This block is responsible for retrieving the current s1Inc0Dec1 value for servo 1. We need to know this value to know if we are incrementing or decrementing Servo_1.
  8. if (Servo_1 >= 180 Degrees) || (Servo_1 <= 0 Degrees): This is a decision block that tests whether servo 1 has reached either of its extreme value limits. If it has, that means that it cannot travel any further in that particular direction.
  9. s1Inc0Dec1 = ! s1Inc0Dec1: This block follows the true path of block #8. If servo 1 has reached its limit to travel in a specific direction, then we need to flip the polarity of s1Inc0Dec1. This allows Servo_1 to start traveling in the opposite direction. That is what this block is responsible for.
  10. retrieve Servo_2 position from memory: In order to keep track of the position of servo 2, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_2.
  11. if (Servo_2 >= 90  Degrees) || (Servo_2 <= 0 Degrees): This is a decision block that tests whether servo 2 has reached either of its extreme value limits. If it has, that means that it cannot travel any further in that particular direction.
  12. s2Inc0Dec1 = ! s2Inc0Dec1: This block follows the true path of block #11. If servo 2 has reached its limit to travel in a specific direction, then we need to flip the polarity of s2Inc0Dec1. This allows Servo_2 to start traveling in the opposite direction. That is what this block is responsible for.
  13. Get Servo_2 s2Inc0Dec1 value: This block is responsible for retrieving the current s2Inc0Dec1 value for servo 2. We need to know this value to know if we are incrementing or decrementing Servo_2. This block gets executed by both the true and false paths of block #11.
  14. if (s2Inc0Dec1 == 0): This is a decision block that tests whether the s2Inc0Dec1 value is 0. If it is, that means that we should be incrementing the servo, otherwise we should be decrementing it.
  15. Move Servo_2 5 degrees up: This block follows the true path of block #14. If the s2Inc0Dec1 value is zero, we then move Servo_2 five degrees up. This block is responsible for incrementing the servo position of Servo_2.
  16. Move Servo_2 5 degrees down: This block follows the false path of block #14. If the s2Inc0Dec1 value is a one, we then move Servo_2 five degrees down. This block is responsible for decrementing the servo position of Servo_2.
  17. Save Servo_2 position to memory: Once we update the position of Servo_2, it becomes very important to keep track of the current position of Servo_2. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  18. retrieve servo_3 position from memory: In order to keep track of the position of servo 3, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_3.
  19. if (Servo_3 >= 180 Degrees) || (Servo_3 <= 0 Degrees): This is a decision block that tests whether servo 3 has reached either of its extreme value limits. If it has, that means that it cannot travel any further in that particular direction.
  20. s3Inc0Dec1 = ! s3Inc0Dec1: This block follows the true path of block #19. If servo 3 has reached its limit to travel in a specific direction, then we need to flip the polarity of s3Inc0Dec1. This allows Servo_3 to start traveling in the opposite direction. That is what this block is responsible for.
  21. Get Servo_3 s3Inc0Dec1 value: This block is responsible for retrieving the current s3Inc0Dec1 value for servo 3. We need to know this value to know if we are incrementing or decrementing Servo_3. This block gets executed by both the true and false paths of block #19.
  22. if (s3Inc0Dec1 == 0): This is a decision block that tests whether the s3Inc0Dec1 value is 0. If it is, that means that we should be incrementing the servo, otherwise we should be decrementing it.
  23. Move Servo_3 5 degrees backwards (up): This block follows the true path of block #22. If the s3Inc0Dec1 value is zero, we then move Servo_3 five degrees backwards (up). This block is responsible for incrementing the servo position of Servo_3.
  24. Move Servo_3 5 degrees forwards (down): This block follows the false path of block #22. If the s3Inc0Dec1 value is a one, we then move Servo_3 five degrees forwards (down). This block is responsible for decrementing the servo position of Servo_3.
  25. Save Servo_3 position to memory: Once we update the position of Servo_3, it becomes very important to keep track of the current position of Servo_3. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  26. retrieve servo_4 position from memory: In order to keep track of the position of servo 4, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_4.
  27. if (Servo_4 >= 90 Degrees) || (Servo_4 <= 0 Degrees): This is a decision block that tests whether servo 4 has reached either of its extreme value limits. If it has, that means that it cannot travel any further in that particular direction.
  28. s4Inc0Dec1 = ! s4Inc0Dec1: This block follows the true path of block #27. If servo 4 has reached its limit to travel in a specific direction, then we need to flip the polarity of s4Inc0Dec1. This allows Servo_4 to start traveling in the opposite direction. That is what this block is responsible for.
  29. Get Servo_4 s4Inc0Dec1 value: This block is responsible for retrieving the current s4Inc0Dec1 value for servo 4. We need to know this value to know if we are incrementing or decrementing Servo_4. This block gets executed by both the true and false paths of block #27.
  30. if (s4Inc0Dec1 == 0): This is a decision block that tests whether the s4Inc0Dec1 value is 0. If it is, that means that we should be incrementing the servo, otherwise we should be decrementing it.
  31. Move Servo_4 5 degrees up: This block follows the true path of block #30. If the s4Inc0Dec1 value is zero, we then move Servo_4 five degrees up. This block is responsible for incrementing the servo position of Servo_4. Once again, we have to save the Servo_4 position to memory after we update it.
  32. Move Servo_4 5 degrees down: This block follows the false path of block #30. If the s4Inc0Dec1 value is a one, we then move Servo_4 five degrees down. This block is responsible for decrementing the servo position of Servo_4. Once again, we have to save the Servo_4 position to memory after we update it.
  33. Get Servo_1 s1Inc0Dec1 value: This block is responsible for retrieving the current s1Inc0Dec1 value for servo 1. We need to know this value to know if we are incrementing or decrementing Servo_1. This block gets executed after both block #31 and block #32, and it also gets executed by the false path of block #8.
  34. if (s1Inc0Dec1 == 0): This is a decision block that tests whether the s1Inc0Dec1 value is 0. If it is, that means that we should be incrementing the servo, otherwise we should be decrementing it.
  35. Move Servo_1 5 degrees to the left: This block follows the true path of block #34. If the s1Inc0Dec1 value is zero, we then move Servo_1 five degrees to the left. This block is responsible for incrementing the servo position of Servo_1.
  36. Move Servo_1 5 degrees to the right: This block follows the false path of block #34. If the s1Inc0Dec1 value is a one, we then move Servo_1 five degrees to the right. This block is responsible for decrementing the servo position of Servo_1.
  37. Save Servo_1 position to memory:  Once we update the position of Servo_1, it becomes very important to keep track of the current position of Servo_1. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  38. retrieve servo_5 position from memory: In order to keep track of the position of servo 5, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_5.
  39. if (Servo_5 >= 180 Degrees) || (Servo_5 <= 0 Degrees): This is a decision block that tests whether servo 5 has reached either of its extreme value limits. If it has, that means that it cannot travel any further in that particular direction.
  40. s5Inc0Dec1 = ! s5Inc0Dec1: This block follows the true path of block #39. If servo 5 has reached its limit to travel in a specific direction, then we need to flip the polarity of s5Inc0Dec1. This allows Servo_5 to start traveling in the opposite direction. That is what this block is responsible for.
  41. Get Servo_5 s5Inc0Dec1 value: This block is responsible for retrieving the current s5Inc0Dec1 value for servo 5. We need to know this value to know if we are incrementing or decrementing Servo_5. This block gets executed by both the true and false paths of block #39.
  42. if (s5Inc0Dec1 == 0): This is a decision block that tests whether the s5Inc0Dec1 value is 0. If it is, that means that we should be incrementing the servo, otherwise we should be decrementing it.
  43. Move Servo_5 5 degrees to the left: This block follows the true path of block #42. If the s5Inc0Dec1 value is zero, we then move Servo_5 five degrees to the left. This block is responsible for incrementing the servo position of Servo_5.
  44. Move Servo_5 5 degrees to the right: This block follows the false path of block #42. If the s5Inc0Dec1 value is a one, we then move Servo_5 five degrees to the right. This block is responsible for decrementing the servo position of Servo_5.
  45. Save Servo_5 position to memory:  Once we update the position of Servo_5, it becomes very important to keep track of the current position of Servo_5. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.

End: This block simply notifies the user that the data flow has ended in the flowchart (its purpose is for the reader and it does not have any relevance in our software design). That is why we did not assign it a number.

Follow Face Flowchart:

Overall Flowchart Description:

Once our Desk-Buddy detects a face, it will do its best to follow that person’s face wherever it goes. This flowchart describes the logic behind this process. The Desk-Buddy will always try to keep a person’s face in the center of the images that it is taking. This means that if a person’s face is detected in the middle of the image, then the Desk-Buddy will not move. If the person’s face is detected to the left of the Desk-Buddy, it will then move its base servo to the left and move the camera left/right servo to the right (to recenter the camera). If a person’s face is detected to the right of the Desk-Buddy, it will then move its base servo to the right and move the camera left/right servo to the left. The remaining places that a face can be detected are the upper-right, upper-left, lower-right, and lower-left portions of the image, along with directly above the center (middle up) and directly below it (middle down). If the Desk-Buddy detects a face in one of these portions of the image, it will then adjust the appropriate servos so that the person’s face will be as close to the middle of the camera image as possible. This process, of course, only happens when the Desk-Buddy actually detects a face. If this is not the case, then the Desk-Buddy will move around in the defined pattern using the Move Servos Flowchart to look around for a person’s face. Each block in this flowchart also has a block number associated with it. In the Block Descriptions section under this flowchart we discuss the purpose of each block and what they do.
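
A minimal C sketch of this recentering logic follows (our own illustration: the move_* helpers stand in for the 5-degree servo updates in the block descriptions below, and DEAD is an assumed pixel tolerance for "close enough to center"):

    /* Assumed helpers: the 5-degree moves from the block descriptions
     * (each would also save the new servo position to memory). */
    extern void move_servo1_left_5(void),  move_servo1_right_5(void);
    extern void move_servo2_up_5(void),    move_servo2_down_5(void);
    extern void move_servo5_left_5(void),  move_servo5_right_5(void);

    #define DEAD 20     /* assumed tolerance, in pixels */

    /* Compare the detected face center against the image center and nudge
     * the servos; servo_1 and servo_5 always turn in opposite directions
     * so that the camera stays centered on the picture. */
    void recenter_on_face(int face_x, int face_y, int img_w, int img_h)
    {
        int dx = face_x - img_w / 2;    /* > 0: face right of center */
        int dy = face_y - img_h / 2;    /* > 0: face below center    */

        if (dx > DEAD) {                /* right-hand regions */
            move_servo1_right_5();
            move_servo5_left_5();
        } else if (dx < -DEAD) {        /* left-hand regions  */
            move_servo1_left_5();
            move_servo5_right_5();
        }
        if (dy < -DEAD)                 /* upper regions */
            move_servo2_up_5();
        else if (dy > DEAD)             /* lower regions */
            move_servo2_down_5();
        /* within DEAD on both axes: the face is centered, do nothing */
    }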

Flowchart:

Block Descriptions:

*Block Descriptions Comments:

*Note: There are 5 servos in our design. Each servo controls a different position on the arm. We have a servo controlling the base of the arm which will turn from left to right (Servo_1). There is another servo controlling the base appendage of the Desk Buddy, allowing it to go up and down (Servo_2). The third servo helps the Desk Buddy lean forwards and backwards (Servo_3). The fourth servo helps the camera to move up and down (Servo_4). The fifth servo helps the camera to move left and right (Servo_5).

*                  middle up
*                      |
*        ______________|______________
*       |  upper left  |  upper right |
*       |  middle left | middle right |
*       |  lower left  |  lower right |
*       |______________|______________|
*                      |
*                 middle down

*In this flowchart we look for a face and determine where on the image that face was detected (if a face was indeed detected on the image). If a face is detected, it will always be within one of the eight different quadrants shown above. Depending on which quadrant the face was detected to be in, the Desk Buddy will move the appropriate servos to try to get the person’s face to show up in the middle of the image. It will do this by updating a combination of servo_1, servo_2, and servo_5.

*servo_1 is updated in order to move the Desk Buddy base from left to right and vice versa.

*servo_2 is updated in order to move the Desk Buddy arm from up to down and vice versa.

*servo_5 is updated in order to move the Desk Buddy camera from left to right and vice versa.

*Note: We move servo_1 and servo_5 in opposite directions so that the camera remains centered with respect to the picture (if we move servo_1 5 degrees to the right we want to move servo_5 5 degrees to the left to balance out the picture, otherwise the picture would be completely off).

 

Begin: This block simply notifies the user that the data flow has begun in the flowchart (its purpose is for the reader and it does not have any relevance in our software design). That is why we did not assign it a number.

  1. Look For Face: This block is responsible for telling the camera to take a picture and processing that picture to see if a face was detected inside of it or not. This is achieved by calling the Look For Face Sub-Flowchart.
  2. Face Detected?: This block is a decision block that checks to see if a face was detected or not based on the output of the Look For Face Flowchart.
  3. Retrieve faceLocation on image: This block follows the true path of block #2. It is responsible for retrieving the coordinates of a person’s face so that we know which quadrant their face was located in and we could adjust the servos accordingly to try and get that person’s face centered on the image.
  4. If (faceLocation == upper right): This is a decision block that checks to see if the person’s face coordinates were located on the upper right portion of the image. Look at the comments above for a better understanding of where each quadrant is with respect to all of the other quadrants. If the faceLocation is in the upper right quadrant of the image, that means that we want to move Servo_1 5 degrees to the right, Servo_2 5 degrees up, and Servo_5 5 degrees to the left. This will move the servos in a way where the upper right quadrant becomes closer to the center of the image.
  5. retrieve Servo_1 position from memory: In order to keep track of the position of servo 1, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_1.
  6. If (Servo_1 > 5 degrees): This is a decision block that checks to make sure that the Servo_1 right limit has not been reached. This ensures us that we can still move Servo_1 to the right.
  7. Move Servo_1 5 degrees to the right: This block follows the true path of block #6. If the right limit for Servo_1 has not been reached, we then move Servo_1 five degrees to the right. This block is responsible for incrementing the servo position of Servo_1.
  8. save Servo_1 position to memory: Once we update the position of Servo_1, it becomes very important to keep track of the current position of Servo_1. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  9. retrieve Servo_2 position from memory: In order to keep track of the position of servo 2, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_2.
  10. If (Servo_2 < 175 degrees): This is a decision block that checks to make sure that the Servo_2 up limit has not been reached. This ensures us that we can still move Servo_2 up.
  11. Move Servo_2 5 degrees up: This block follows the true path of block #10. If the up limit for Servo_2 has not been reached, we then move Servo_2 five degrees up. This block is responsible for incrementing the servo position of Servo_2.
  12. save Servo_2 position to memory: Once we update the position of Servo_2, it becomes very important to keep track of the current position of Servo_2. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  13. retrieve Servo_5 position from memory: In order to keep track of the position of servo 5, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_5.
  14. If (Servo_5 < 175 degrees): This is a decision block that checks to make sure that the Servo_5 left limit has not been reached. This ensures us that we can still move Servo_5 to the left.
  15. Move Servo_5 5 degrees to the left: This block follows the true path of block #14. If the Servo_5 left limit has not been reached, we then move Servo_5 five degrees to the left. This block is responsible for incrementing the servo position of Servo_5.
  16. save Servo_5 position to memory: Once we update the position of Servo_5, it becomes very important to keep track of the current position of Servo_5. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  17. If (faceLocation == upper left): This is a decision block that checks to see if the person’s face coordinates were located on the upper left portion of the image. Look at the comments above for a better understanding of where each quadrant is with respect to all of the other quadrants. If the faceLocation is in the upper left quadrant of the image, that means that we want to move Servo_1 5 degrees to the left, Servo_2 5 degrees up, and Servo_5 5 degrees to the right. This will move the servos in a way where the upper left quadrant becomes closer to the center of the image.
  18. retrieve Servo_1 position from memory: In order to keep track of the position of servo 1, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_1.
  19. If (Servo_1 < 175 degrees): This is a decision block that checks to make sure that the Servo_1 left limit has not been reached. This ensures us that we can still move Servo_1 to the left.
  20. Move Servo_1 5 degrees to the left: This block follows the true path of block #19. If the Servo_1 left limit has not been reached, we then move Servo_1 five degrees to the left. This block is responsible for incrementing the servo position of Servo_1.
  21. save Servo_1 position to memory: Once we update the position of Servo_1, it becomes very important to keep track of the current position of Servo_1. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  22. retrieve Servo_2 position from memory: In order to keep track of the position of servo 2, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_2.
  23. If (Servo_2 < 175 degrees): This is a decision block that checks to make sure that the Servo_2 up limit has not been reached. This ensures us that we can still move Servo_2 up.
  24. Move Servo_2 5 degrees up: This block follows the true path of block #23. If the up limit for Servo_2 has not been reached, we then move Servo_2 five degrees up. This block is responsible for incrementing the servo position of Servo_2.
  25. save Servo_2 position to memory: Once we update the position of Servo_2, it becomes very important to keep track of the current position of Servo_2. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  26. retrieve Servo_5 position from memory: In order to keep track of the position of servo 5, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_5.
  27. If (Servo_5 > 5 degrees): This is a decision block that checks to make sure that the Servo_5 right limit has not been reached. This ensures us that we can still move Servo_5 to the right.
  28. Move Servo_5 5 degrees to the right: This block follows the true path of block #27. If the right limit for Servo_5 has not been reached, we then move Servo_5 five degrees to the right. This block is responsible for incrementing the servo position of Servo_5.
  29. save Servo_5 position to memory: Once we update the position of Servo_5, it becomes very important to keep track of the current position of Servo_5. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  30. If (faceLocation == middle left): This is a decision block that checks to see if the person’s face coordinates were located on the middle left portion of the image. Look at the comments above for a better understanding of where each quadrant is with respect to all of the other quadrants. If the faceLocation is in the middle left quadrant of the image, that means that we want to move Servo_1 5 degrees to the left and Servo_5 5 degrees to the right. This will move the servos in a way where the middle left quadrant becomes closer to the center of the image.
  31. retrieve Servo_1 position from memory: In order to keep track of the position of servo 1, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_1.
  32. If (Servo_1 < 175 degrees): This is a decision block that checks to make sure that the Servo_1 left limit has not been reached. This ensures us that we can still move Servo_1 to the left.
  33. Move Servo_1 5 degrees to the left: This block follows the true path of block #32. If the Servo_1 left limit has not been reached, we then move Servo_1 five degrees to the left. This block is responsible for incrementing the servo position of Servo_1.
  34. save Servo_1 position to memory: Once we update the position of Servo_1, it becomes very important to keep track of the current position of Servo_1. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  35. retrieve Servo_5 position from memory: In order to keep track of the position of servo 5, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_5.
  36. If (Servo_5 > 5 degrees): This is a decision block that checks to make sure that the Servo_5 right limit has not been reached. This ensures us that we can still move Servo_5 to the right.
  37. Move Servo_5 5 degrees to the right: This block follows the true path of block #36. If the right limit for Servo_5 has not been reached, we then move Servo_5 five degrees to the right. This block is responsible for incrementing the servo position of Servo_5.
  38. save Servo_5 position to memory: Once we update the position of Servo_5, it becomes very important to keep track of the current position of Servo_5. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  39. If (faceLocation == middle right): This is a decision block that checks to see if the person’s face coordinates were located on the middle right portion of the image. Look at the comments above for a better understanding of where each quadrant is with respect to all of the other quadrants. If the faceLocation is in the middle right quadrant of the image, that means that we want to move Servo_1 5 degrees to the right and Servo_5 5 degrees to the left. This will move the servos in a way where the middle right quadrant becomes closer to the center of the image.
  40. retrieve Servo_1 position from memory: In order to keep track of the position of servo 1, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_1.
  41. If (Servo_1 > 5 degrees): This is a decision block that checks to make sure that the Servo_1 right limit has not been reached. This ensures us that we can still move Servo_1 to the right.
  42. Move Servo_1 5 degrees to the right: This block follows the true path of block #41. If the right limit for Servo_1 has not been reached, we then move Servo_1 five degrees to the right. This block is responsible for incrementing the servo position of Servo_1.
  43. save Servo_1 position to memory: Once we update the position of Servo_1, it becomes very important to keep track of the current position of Servo_1. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  44. retrieve Servo_5 position from memory: In order to keep track of the position of servo 5, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_5.
  45. If (Servo_5 < 175 degrees): This is a decision block that checks to make sure that the Servo_5 left limit has not been reached. This ensures us that we can still move Servo_5 to the left.
  46. Move Servo_5 5 degrees to the left: This block follows the true path of block #45. If the Servo_5 left limit has not been reached, we then move Servo_5 five degrees to the left. This block is responsible for incrementing the servo position of Servo_5.
  47. save Servo_5 position to memory: Once we update the position of Servo_5, it becomes very important to keep track of the current position of Servo_5. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  48. If (faceLocation == lower right): This is a decision block that checks to see if the person’s face coordinates were located on the lower right portion of the image. Look at the comments above for a better understanding of where each quadrant is with respect to all of the other quadrants. If the faceLocation is in the lower right quadrant of the image, that means that we want to move Servo_1 5 degrees to the right, Servo_2 5 degrees down, and Servo_5 5 degrees to the left. This will move the servos in a way where the lower right quadrant becomes closer to the center of the image.
  49. retrieve Servo_1 position from memory: In order to keep track of the position of servo 1, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_1.
  50. If (Servo_1 > 5 degrees): This is a decision block that checks to make sure that the Servo_1 right limit has not been reached. This ensures us that we can still move Servo_1 to the right.
  51. Move Servo_1 5 degrees to the right: This block follows the true path of block #50. If the right limit for Servo_1 has not been reached, we then move Servo_1 five degrees to the right. This block is responsible for incrementing the servo position of Servo_1.
  52. save Servo_1 position to memory: Once we update the position of Servo_1, it becomes very important to keep track of the current position of Servo_1. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  53. retrieve Servo_2 position from memory: In order to keep track of the position of servo 2, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_2.
  54. If (Servo_2 > 5 degrees): This is a decision block that checks to make sure that the Servo_2 down limit has not been reached. This ensures us that we can still move Servo_2 down.
  55. Move Servo_2 5 degrees down: This block follows the true path of block #54. If the down limit for Servo_2 has not been reached, we then move Servo_2 five degrees down. This block is responsible for decrementing the servo position of Servo_2.
  56. save Servo_2 position to memory: Once we update the position of Servo_2, it becomes very important to keep track of the current position of Servo_2. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  57. retrieve Servo_5 position from memory: In order to keep track of the position of servo 5, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_5.
  58. If (Servo_5 < 175 degrees): This is a decision block that checks to make sure that the Servo_5 left limit has not been reached. This ensures us that we can still move Servo_5 to the left.
  59. Move Servo_5 5 degrees to the left: This block follows the true path of block #58. If the Servo_5 left limit has not been reached, we then move Servo_5 five degrees to the left. This block is responsible for incrementing the servo position of Servo_5.
  60. save Servo_5 position to memory: Once we update the position of Servo_5, it becomes very important to keep track of the current position of Servo_5. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  61. If (faceLocation == lower left): This is a decision block that checks to see if the person’s face coordinates were located on the lower left portion of the image. Look at the comments above for a better understanding of where each quadrant is with respect to all of the other quadrants. If the faceLocation is in the lower left quadrant of the image, that means that we want to move Servo_1 5 degrees to the left, Servo_2 5 degrees down, and Servo_5 5 degrees to the right. This will move the servos in a way where the lower left quadrant becomes closer to the center of the image.
  62. retrieve Servo_1 position from memory: In order to keep track of the position of servo 1, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_1.
  63. If (Servo_1 < 175 degrees): This is a decision block that checks to make sure that the Servo_1 left limit has not been reached. This ensures us that we can still move Servo_1 to the left.
  64. Move Servo_1 5 degrees to the left: This block follows the true path of block #63. If the Servo_1 left limit has not been reached, we then move Servo_1 five degrees to the left. This block is responsible for incrementing the servo position of Servo_1.
  65. save Servo_1 position to memory: Once we update the position of Servo_1, it becomes very important to keep track of the current position of Servo_1. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  66. retrieve Servo_2 position from memory: In order to keep track of the position of servo 2, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_2.
  67. If (Servo_2 > 5 degrees): This is a decision block that checks to make sure that the Servo_2 down limit has not been reached. This ensures us that we can still move Servo_2 down.
  68. Move Servo_2 5 degrees down: This block follows the true path of block #67. If the down limit for Servo_2 has not been reached, we then move Servo_2 five degrees down. This block is responsible for decrementing the servo position of Servo_2.
  69. save Servo_2 position to memory: Once we update the position of Servo_2, it becomes very important to keep track of the current position of Servo_2. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  70. retrieve Servo_5 position from memory: In order to keep track of the position of servo 5, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_5.
  71. If (Servo_5 > 5 degrees): This is a decision block that checks to make sure that the Servo_5 right limit has not been reached. This ensures us that we can still move Servo_5 to the right.
  72. Move Servo_5 5 degrees to the right: This block follows the true path of block #71. If the right limit for Servo_5 has not been reached, we then move Servo_5 five degrees to the right. This block is responsible for incrementing the servo position of Servo_5.
  73. save Servo_5 position to memory: Once we update the position of Servo_5, it becomes very important to keep track of the current position of Servo_5. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  74. If (faceLocation == middle up): This is a decision block that checks to see if the person’s face coordinates were located on the middle up portion of the image. Look at the comments above for a better understanding of where each quadrant is with respect to all of the other quadrants. If the faceLocation is in the middle up quadrant of the image, that means that we want to move Servo_2 5 degrees up. This will move the servos in a way where the middle up quadrant becomes closer to the center of the image.
  75. retrieve Servo_2 position from memory: In order to keep track of the position of servo 2, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_2.
  76. If (Servo_2 < 175 degrees): This is a decision block that checks to make sure that the Servo_2 up limit has not been reached. This ensures us that we can still move Servo_2 up.
  77. Move Servo_2 5 degrees up: This block follows the true path of block #76. If the up limit for Servo_2 has not been reached, we then move Servo_2 five degrees up. This block is responsible for incrementing the servo position of Servo_2.
  78. save Servo_2 position to memory: Once we update the position of Servo_2, it becomes very important to keep track of the current position of Servo_2. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.
  79. If (faceLocation == middle down): This is a decision block that checks to see if the person’s face coordinates were located on the middle down portion of the image. Look at the comments above for a better understanding of where each quadrant is with respect to all of the other quadrants. If the faceLocation is in the middle down quadrant of the image, that means that we want to move Servo_2 5 degrees down. This will move the servos in a way where the middle down quadrant becomes closer to the center of the image.
  80. retrieve Servo_2 position from memory: In order to keep track of the position of servo 2, we need to save its position to memory. This block is responsible for getting the correct memory address and retrieving the correct data associated with the position of Servo_2.
  81. If (Servo_2 > 5 degrees): This is a decision block that checks to make sure that the Servo_2 down limit has not been reached. This ensures us that we can still move Servo_2 down.
  82. Move Servo_2 5 degrees down: This block follows the true path of block #81. If the down limit for Servo_2 has not been reached, we then move Servo_2 five degrees down. This block is responsible for decrementing the servo position of Servo_2.
  83. save Servo_2 position to memory: Once we update the position of Servo_2, it becomes very important to keep track of the current position of Servo_2. This is why we save this value to memory by overwriting its previously saved value in memory with the current position value.

End: This block simply notifies the user that the data flow has ended in the flowchart (its purpose is for the reader and it does not have any relevance in our software design). That is why we did not assign it a number.

LED/Speaker Flowchart:

Overall Flowchart Description:

This flowchart corresponds to the LED color change and speaker output. Both the LED lights and the speaker have corresponding states that go with each “personality” or “mood”. The flowchart goes through and finds what “mood” or state the Desk-Buddy is in. Once it determines the correct mood, it will then light up the correct RGB LED combination and produce the corresponding sound output. It will then wait until the Desk-Buddy switches to another state to change the LEDs and sound. We do this because we only want the Desk-Buddy to output the appropriate noise and change its LED color once every time it enters the appropriate state. The flowchart starts off by assigning the correct startup sound and setting the LEDs to the correct startup color (gray). Then it checks to see if the Desk-Buddy is in its happy state; if it is, then it will produce a happy sound and turn the LEDs green to indicate happiness. The light will then stay green until the mood or state changes. It will do the same for the mad state, producing a mad sound and red LEDs to indicate being mad. The last mood we will have is sadness. In this state, the Desk-Buddy will produce a sad noise along with changing the LEDs to blue. If the design doesn’t detect a face for a certain amount of time, it will then enter an idle state, which will cause the design to produce a noise indicating that it is going idle, along with gray LEDs. Each block in this flowchart also has a block number associated with it. In the Block Descriptions section under this flowchart we discuss the purpose of each block and what they do.
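
The whole mapping can be captured in a small C sketch (a sketch only: set_leds_rgb() and play_sound() stand in for the real LED and speaker drivers, and the RGB triples are just plausible placeholders for gray, green, red, and blue):

    /* Assumed driver hooks for the RGB LEDs and the speaker. */
    extern void set_leds_rgb(int r, int g, int b);
    extern void play_sound(int which);
    enum { SOUND_IDLE, SOUND_HAPPY, SOUND_MAD, SOUND_SAD };

    /* Map the Desk-Buddy's state to an LED color and a sound. Called once
     * per state change, so the outputs fire only when the mood switches. */
    void output_leds_and_noise(int state)
    {
        switch (state) {
        case 0: set_leds_rgb(128, 128, 128); play_sound(SOUND_IDLE);  break; /* gray  */
        case 1: set_leds_rgb(0, 255, 0);     play_sound(SOUND_HAPPY); break; /* green */
        case 2: set_leds_rgb(255, 0, 0);     play_sound(SOUND_MAD);   break; /* red   */
        case 3: set_leds_rgb(0, 0, 255);     play_sound(SOUND_SAD);   break; /* blue  */
        }
    }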

Flowchart:

Block Descriptions:

Begin: This block simply notifies the user that the data flow has begun in the flowchart (its purpose is for the reader and it does not have any relevance in our software design). That is why we did not assign it a number.

  1. Input State (from top level flowchart): This block is responsible for getting the input State from the Top-Level Software flowchart. This State variable is used to see what color to change the LEDs and what specific noise to make.
  2. if (State == 0): This decision block checks to see if the State is equal to zero (if the system is in the idle state).
  3. Select proper R, G, B values for LEDs (Gray color): This block follows the true path of block #2. It is responsible for selecting the correct RGB combination to produce a gray color (the color associated with the idle state).
  4. Output Gray color on LEDs: This block is responsible for updating all of the LEDs in the design and making sure that all of them are gray.
  5. Select appropriate sound (Idle sound): This block is responsible for selecting the appropriate sound that signifies that the Desk-Buddy is in the idle state.
  6. Output Idle Sound to speaker: This block is responsible for outputting the selected sound to the speaker connected to the Desk-Buddy.
  7. if (State == 1): This block follows the false path of block #2. It is a decision block that checks to see if the State is equal to one (if the system is in the happy state).
  8. Select proper R, G, B values for LEDs (Green Color): This block follows the true path of block #7. It is responsible for selecting the correct RGB combination to produce a green color (the color associated with the happy state).
  9. Output Green color on LEDs: This block is responsible for updating all of the LEDs in the design and making sure that all of them are green.
  10. Select appropriate sound (Happy sound): This block is responsible for selecting the appropriate sound that signifies that the Desk-Buddy is in the happy state.
  11. Output Happy Sound to speaker: This block is responsible for outputting the selected sound to the speaker connected to the Desk-Buddy.
  12. if (State == 2): This block follows the false path of block #7. It is a decision block that checks to see if the State is equal to two (if the system is in the mad state).
  13. Select proper R, G, B values for LEDs (Red Color): This block follows the true path of block #12. It is responsible for selecting the correct RGB combination to produce a red color (the color associated with the mad state).
  14. Output Red color on LEDs: This block is responsible for updating all of the LEDs in the design and making sure that all of them are red.
  15. Select appropriate sound (Mad sound): This block is responsible for selecting the appropriate sound that signifies that the Desk-Buddy is in the mad state.
  16. Output Mad Sound to speaker: This block is responsible for outputting the selected sound to the speaker connected to the Desk-Buddy.
  17. if (State == 3): This block follows the false path of block #12. It is a decision block that checks to see if the State is equal to three (if the system is in the sad state).
  18. Select proper R, G, B values for LEDs (Blue Color): This block follows the true path of block #17. It is responsible for selecting the correct RGB combination to produce a blue color (the color associated with the sad state).
  19. Output Blue color on LEDs: This block is responsible for updating all of the LEDs in the design and making sure that all of them are blue.
  20. Select appropriate sound (Sad sound): This block is responsible for selecting the appropriate sound that signifies that the Desk-Buddy is in the sad state.
  21. Output Sad Sound to speaker: This block is responsible for outputting the selected sound to the speaker connected to the Desk-Buddy

End: This block simply notifies the user that the data flow has ended in the flowchart (its purpose is for the reader and it does not have any relevance in our software design). That is why we did not assign it a number.
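
To make the decision chain above concrete, here is a minimal Verilog sketch of how the State input could be decoded into an LED color and a one-shot sound trigger. The module and signal names (mood_outputs, sound_start, and so on) are our own illustration rather than the actual Desk-Buddy source, but the RGB values and the once-per-state-entry behavior match the flowchart.

```verilog
// Minimal sketch of the LED/speaker state decoding described above.
// Module and signal names are our own illustration, not the real design.
module mood_outputs (
    input  wire       clk,
    input  wire [1:0] state,       // 0=idle, 1=happy, 2=mad, 3=sad
    output reg  [7:0] led_r,
    output reg  [7:0] led_g,
    output reg  [7:0] led_b,
    output reg  [1:0] sound_sel,   // which stored sound to play
    output reg        sound_start  // one-clock pulse on each state change
);
    reg [1:0] prev_state = 2'd0;

    always @(posedge clk) begin
        // Pulse sound_start only when the mood actually changes, so the
        // noise is produced once per entry into a state (see flowchart).
        sound_start <= (state != prev_state);
        prev_state  <= state;
        sound_sel   <= state;

        case (state)
            2'd0: {led_r, led_g, led_b} <= {8'd128, 8'd128, 8'd128}; // idle: gray
            2'd1: {led_r, led_g, led_b} <= {8'd0,   8'd128, 8'd0  }; // happy: green
            2'd2: {led_r, led_g, led_b} <= {8'd255, 8'd0,   8'd0  }; // mad: red
            2'd3: {led_r, led_g, led_b} <= {8'd0,   8'd0,   8'd255}; // sad: blue
        endcase
    end
endmodule
```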

Software Task List:

Task # | Task Name | Task Leader | Additional Team Members Involved | Estimated Time Needed
1 | Facial Detection Algorithm | Michael Parra | Jose Trejo | 15 Hours
Task Description: There are roughly six or seven face detection algorithms; what we need to establish is which one to use. The Viola-Jones face detection algorithm seems the best candidate to implement on the FPGA of choice. The face detection algorithm will use the stored image to undergo its detection process. The purpose of this step is to fully understand what the algorithm is doing and looking for with regard to face detection.
2 | Updating Servo Positions | Skyler Tran | Victor Espinoza | 10 Hours
Task Description: This task consists of designing a software routine that increases or decreases a servo's position by 5 degrees. Each servo moves individually, and each servo's position is saved to a memory location (in degrees) so that we never lose that servo's position. We will make sure that all servos can be addressed and incremented individually. A rough sketch of this idea follows below.
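
As a rough idea of what this routine could look like in Verilog, the sketch below keeps one servo's saved position in degrees, steps it by 5 degrees, and converts it into the usual 50 Hz hobby-servo pulse using the 100 MHz oscillator from our parts list. The module name and the exact 1 to 2 ms pulse mapping are assumptions; the real servos would need calibration.

```verilog
// Sketch of one servo channel: a stored position in degrees that moves in
// 5-degree steps, converted to a PWM pulse on a 100 MHz clock. The 1-2 ms
// pulse mapping is the common hobby-servo convention, not measured values.
module servo_channel (
    input  wire clk,          // 100 MHz
    input  wire step_up,      // one-clock pulse: +5 degrees
    input  wire step_down,    // one-clock pulse: -5 degrees
    output reg  pwm
);
    reg  [7:0]  degrees = 8'd90;   // saved position, 0..180 degrees
    reg  [20:0] cnt     = 21'd0;   // counts one 20 ms frame
    // ~1 ms at 0 degrees up to ~2 ms at 180 degrees (556 ~= 100_000/180).
    wire [20:0] high_time = 21'd100_000 + degrees * 21'd556;

    always @(posedge clk) begin
        // Update the saved position, clamped to the 0..180 degree range.
        if (step_up   && degrees <= 8'd175) degrees <= degrees + 8'd5;
        if (step_down && degrees >= 8'd5)   degrees <= degrees - 8'd5;

        // 50 Hz frame: 2,000,000 clocks at 100 MHz.
        cnt <= (cnt == 21'd1_999_999) ? 21'd0 : cnt + 21'd1;
        pwm <= (cnt < high_time);
    end
endmodule
```

One copy of this module per servo would give the five independently movable joints, with each position register acting as the saved memory location described above.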
3 | Capture Image Data | Skyler Tran | Michael Parra | 10 Hours
Task Description: This task consists of capturing the image data from the camera. This means that we need to know when we should tell the camera to take a picture. In order to do this, we need to measure the amount of time that it takes to capture an image so we can adjust its dimensions to a size that would allow the microcontroller to handle a high frame rate.
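
A hedged sketch of the capture side is shown below, using the OV7670's documented PCLK/VSYNC/HREF/D pins; the output names, the sampling behavior, and the QVGA-sized address counter are our own assumptions, not the project's final code.

```verilog
// Hedged sketch of latching pixels from an OV7670-style interface.
// VSYNC pulses during vertical blanking; HREF is high while a row's
// bytes are valid on D. Output names and widths are our assumptions.
module ov7670_capture (
    input  wire        pclk,
    input  wire        vsync,
    input  wire        href,
    input  wire [7:0]  d,
    output reg         pixel_we,  // write strobe toward RAM/FIFO
    output reg  [7:0]  pixel,
    output reg  [17:0] addr       // enough for a QVGA RGB565 frame
);
    always @(posedge pclk) begin
        if (vsync) begin
            addr     <= 18'd0;     // new frame: restart the write address
            pixel_we <= 1'b0;
        end else begin
            pixel_we <= href;      // strobe aligned with the latched byte
            pixel    <= d;
            if (pixel_we)
                addr <= addr + 18'd1;  // advance only after each write
        end
    end
endmodule
```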
4 | Save Image Data to Cellular RAM | Victor Espinoza | Michael Parra | 15 Hours
Task Description: This task focuses on ensuring that the image from the camera is correctly saved to the appropriate place in memory (most likely to the microcontroller's RAM for easy storage and manipulation of the data, and then to an external SD card). Once stored in memory, the image can later be accessed to determine whether it contains a face and, if so, to return the face's location. This task is also in charge of keeping track of the current memory address so that we do not accidentally overwrite previously saved image data that could still be in the facial detection processing stage (see the double-buffering sketch below).
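
One simple way to guarantee that an in-flight frame is never overwritten is double buffering. The sketch below is our own illustration of that idea (the signal names are hypothetical): the capture logic writes one memory bank while the detector reads the other, and the banks swap only when both sides are ready.

```verilog
// Sketch of the double-buffering idea: capture writes one bank while the
// face detector reads the other, so a frame still being processed is
// never overwritten. Names (frame_done, detect_busy) are our own.
module frame_bank_select (
    input  wire clk,
    input  wire frame_done,      // capture finished writing a frame
    input  wire detect_busy,     // detection still reading the other bank
    output reg  write_bank = 1'b0  // used as the top address bit for writes
);
    always @(posedge clk)
        // Swap banks only when a frame completes AND the reader is idle;
        // otherwise keep rewriting the same bank to protect the old data.
        if (frame_done && !detect_busy)
            write_bank <= ~write_bank;
endmodule
```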
5 | Implement Facial Detection Algorithm | Michael Parra | Skyler Tran | 20 Hours
Task Description: After fully understanding the algorithm, the next step is to implement it in code. What will be settled here are the microcontroller used, the timing, and the programming language. Depending on the microcontroller chosen, the team will decide which program is best for implementing the algorithm on-board. In this step we will focus on debugging the facial detection code and running it against various pictures to see whether it actually works. Here we may also decide whether or not it is best to use OpenCV for the facial detection.
6 | Process Image Data | Michael Parra | Skyler Tran | 16 Hours
Task Description: In this step we will combine the facial detection software with the image captured by the embedded system's camera. The focus will be on debugging and fixing compatibility issues among the facial detection software, the camera, the image file format, and the timing constraints. The timing constraints will help us decide how fast we can capture images with our camera while running the face detection system.
7 | Creating Emotion Sounds | Jose Trejo | Michael Parra | 15 Hours
Task Description: This task focuses on writing the code that generates the different sounds our design will produce. Different sounds will play at different frequencies depending on what mood the Desk-Buddy is in. The Desk-Buddy will produce five different sounds: one on startup when the design is powered on, three that demonstrate its mood (happy, mad, and sad), and one when it goes into idle mode. A sketch of the frequency-based approach follows below.
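
A minimal Verilog sketch of this idea appears below: each mood selects a different divider for the 100 MHz clock, producing a square wave at a different pitch on the speaker pin. The specific frequency table is illustrative, not the final sound design.

```verilog
// Sketch of mood tones as square waves at different pitches: divide the
// 100 MHz clock down to an audible frequency chosen per sound.
module tone_gen (
    input  wire       clk,        // 100 MHz
    input  wire [1:0] sound_sel,  // 0=idle, 1=happy, 2=mad, 3=sad
    input  wire       enable,
    output reg        speaker = 1'b0
);
    reg [17:0] cnt = 18'd0;
    reg [17:0] half_period;

    // Half-periods in 100 MHz clocks (e.g. 440 Hz -> 100e6/880 ~= 113,636).
    always @(*) begin
        case (sound_sel)
            2'd0:    half_period = 18'd227_272;  // ~220 Hz, idle
            2'd1:    half_period = 18'd56_818;   // ~880 Hz, happy
            2'd2:    half_period = 18'd113_636;  // ~440 Hz, mad
            default: half_period = 18'd151_515;  // ~330 Hz, sad
        endcase
    end

    always @(posedge clk) begin
        if (!enable) begin
            cnt <= 18'd0;
        end else if (cnt >= half_period) begin
            cnt     <= 18'd0;
            speaker <= ~speaker;  // toggle -> square wave at the pitch
        end else begin
            cnt <= cnt + 18'd1;
        end
    end
endmodule
```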
8 | Outputting Appropriate Sounds | Jose Trejo | n/a | 10 Hours
Task Description: In this task we will take the different sounds generated in the previous task and assign them to their correct pattern and sequence. Each sound is assigned to a “personality”, which helps indicate the state our design is in. In this way, the Desk-Buddy will be able to express different sentiments by communicating with the user via sound.
9 | Selecting LED Color | Victor Espinoza | n/a | 10 Hours
Task Description: This task consists of selecting the appropriate LED colors. So far, we are using the following colors in our design: gray = idle, green = happy, red = angry, blue = sad. In order to get the different color schemes, we adjust the values of the red, green, and blue LED channels accordingly. The gray color has an RGB combination of R=128, G=128, B=128. The green color has an RGB combination of R=0, G=128, B=0. The red color has an RGB combination of R=255, G=0, B=0. The blue color has an RGB combination of R=0, G=0, B=255.
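
A small Verilog sketch of one color channel is shown below: an 8-bit PWM whose duty cycle is the R, G, or B value listed above (for example 128/255 for gray), with the TIP120 transistors from our parts list doing the actual strip switching. The prescaler value and module name are our own assumptions.

```verilog
// Sketch of one LED strip channel: 8-bit PWM where 'level' is the R, G,
// or B value from the color table above.
module led_pwm (
    input  wire       clk,      // 100 MHz
    input  wire [7:0] level,    // 0..255 brightness for this channel
    output reg        out
);
    // Prescaler: advance the ramp every 1024 clocks so the TIP120 switches
    // at roughly 100e6/1024/256 ~= 380 Hz rather than hundreds of kHz.
    reg [9:0] pre  = 10'd0;
    reg [7:0] ramp = 8'd0;

    always @(posedge clk) begin
        pre <= pre + 10'd1;
        if (pre == 10'd1023)
            ramp <= ramp + 8'd1;   // free-running 0..255 ramp
        out <= (ramp < level);     // high for level/256 of each PWM cycle
    end
endmodule
```

Three copies of this module, one per channel fed with the R, G, and B values for the current mood, would drive the whole strip.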
10 | Outputting Appropriate LED Color | Victor Espinoza | n/a | 8 Hours
Task Description: This task consists of making sure that the appropriate LED color is displayed on the LEDs based on the Desk-Buddy’s mood. Each mood that our Desk-Buddy can exhibit (happy, sad, angry, idle) will result in the LEDs displaying an appropriate color. When the Desk-Buddy is happy, the LEDs will all be green. When the Desk-Buddy is sad, all of the LEDs will be blue. When the Desk-Buddy is angry, all of the LEDs will be red. Finally, when the Desk-Buddy is idle, all of the LEDs will be gray.
11 | Make Sure States Are Switching Correctly | Victor Espinoza | n/a | 10 Hours
Task Description: This task consists of making sure the different states are displaying the correct results (right LED color and right output noise) and that they are switching to the next state correctly. This task is also responsible for making sure that the Desk-Buddy only stays in the Mad State for up to 15 seconds and that it only stays in the Sad State for up to 5 seconds.
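
The sketch below shows one way the 15-second and 5-second limits could be enforced in Verilog, assuming the 100 MHz clock and the state encoding used earlier; the force_idle output and the module name are our own assumptions about how the state machine would consume the timeout.

```verilog
// Sketch of the mood time limits: signal the state machine to leave Mad
// after 15 s and Sad after 5 s. Assumes a 100 MHz clock and the same
// state encoding as the other sketches.
module mood_timeout (
    input  wire       clk,          // 100 MHz
    input  wire [1:0] state,        // 0=idle, 1=happy, 2=mad, 3=sad
    output wire       force_idle    // tells the FSM to leave the mood
);
    localparam [30:0] MAD_LIMIT = 31'd1_500_000_000; // 15 s of clocks
    localparam [30:0] SAD_LIMIT = 31'd500_000_000;   //  5 s of clocks

    reg [30:0] cnt  = 31'd0;
    reg [1:0]  prev = 2'd0;

    always @(posedge clk) begin
        prev <= state;
        // Restart the counter each time a new state is entered.
        cnt  <= (state != prev) ? 31'd0 : cnt + 31'd1;
    end

    assign force_idle = (state == 2'd2 && cnt >= MAD_LIMIT) ||
                        (state == 2'd3 && cnt >= SAD_LIMIT);
endmodule
```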


Gantt Diagram:

*Note: We realize that this image is really hard to view in this document since it is so wide (it spans the 15 weeks that we have for next semester). That is why we have also included the image file in the zip file that we uploaded to BeachBoard. Some quick notes about the diagram colors:


  1. All of the tasks that are colored green correspond to the tasks that Michael Parra is responsible for.
  2. All of the tasks that are colored gray correspond to the tasks that Skyler Tran is responsible for.
  3. All of the tasks that are colored blue correspond to the tasks that Victor Espinoza is responsible for.
  4. All of the tasks that are colored yellow-orange correspond to the tasks that Jose Trejo is responsible for.
  5. The date above the days corresponds to the day that Sunday lands on (for example, in the week with Jan 24 above the days, the 24th corresponds to a Sunday, which is also the beginning of the week).

Costs:


Qty | Price | Description | Part # | Supplier
5 | $20.00 | Hi-torque servos | MG966R | eBay
1 | $20.00 | Spartan 6 FPGA | XC6SLX9-2TQG144C | eBay
1 | $6.50 | SDRAM 256Mbit (16Mbit x 16) | MT48LC16M16A2P-75IT | eBay
1 | $1.24 | 8Mb SPI Flash | SST25VF080B-80-4I | eBay
1 | $6.95 | FPGA PROM | XCF04S | eBay
1 | $2.65 | 3.3V 1.5A voltage regulator | LT1086-3.3 | Digikey
1 | $4.02 | 100MHz 3.3V oscillator, HCMOS/TTL | CTX318LVCT-ND | Digikey
1 | $2.12 | USB to UART | MCP2200-I/SO-ND | Digikey
3 | $0.76 (1 ea) | 22uF 6.3V capacitor | 445-1595-1-ND | Digikey
2 | $0.53 (1 ea) | 4.7uF 10V capacitor | 587-1379-1-ND | Digikey
5 | $0.74 (10 ea) | 0.1uF 6.3V capacitor | 709-1009-1-ND | Digikey
5 | $0.06 (10 ea) | 0.01uF 16V capacitor | 490-1525-1-ND | Digikey
1 | $0.16 (10 ea) | 470nF capacitor | 490-1548-1-ND | Digikey
6 | $0.02 (1 ea) | 0-ohm resistor jumper | P0.0GCT-ND | Digikey
3 | $0.02 (1 ea) | 100-ohm resistor | P100GCT-ND | Digikey
2 | $0.02 (1 ea) | 75-ohm resistor | P75GCT-ND | Digikey
2 | $0.02 (1 ea) | 300-ohm resistor | P300GCT-ND | Digikey
2 | $0.56 (1 ea) | 5K trimpot | 3302W-502ECT-ND | Digikey
3 | $0.02 (1 ea) | 330-ohm resistor | P330GCT-ND | Digikey
2 | $0.02 (1 ea) | 4.7Kohm resistor | P4.7KGCT-ND | Digikey
1 | $0.02 (1 ea) | 3.9Kohm resistor | P3.9KGCT-ND | Digikey
1 | $0.02 (1 ea) | 470-ohm resistor | P470GCT-ND | Digikey
1 | $0.56 (1 ea) | Blue LED | 160-1647-1-ND | Digikey
1 | $0.52 (1 ea) | Green LED | 160-1435-1-ND | Digikey
1 | $10.00 | RGB LED light strip | SMD5050 | Adafruit
1 | $11.96 | 640×480 camera with FIFO | OV7670 + AL422 CMOS | eBay
1 | $1.85 | Mini speaker, 8 ohm 0.5 W | KS-3008 (Adafruit 1898) | Adafruit
3 | $5.00 | Complementary power Darlington transistors | TIP120 | Fairchild

Above is a list of all major components that will be used in creating the Desk-Buddy. The total cost of the components comes out to $96.35. Developing the first Desk-Buddy will take us about 800 hours; these hours are broken down in the Hardware Task List. This applies only to the very first production unit. After that, the time per unit should be cut roughly in half, since we would have the engineering manual and other completed work that can be reused. So producing 1,000 units would take a total component cost of $96,350.00 and approximately 400,000 hours. We estimate that it will cost an additional hundred dollars or so to create our custom PCB (we will create 5 of them in case some are defective). The final product with our custom PCB added will cost around $300 (with building time taken into account as well). If we mass-produce the custom PCB, this price will become significantly cheaper and there will be a higher return on investment.

Conclusions:  

As we have been discussing, the main problem is implementing the facial detection algorithm. To meet the hardware design requirement we are trying to use the FPGA, because it is designed in an HDL. We were originally going to use the Raspberry Pi and Arduino together to get our project to work, but we noticed that this does not really meet the hardware/software design requirement; therefore we have moved the Arduino's duties to the FPGA. We do not believe this will be that hard to implement, but we have yet to find out. Furthermore, there is a concern about implementing the face detection algorithm in Verilog. We have read many published articles about this and believe it might be slightly beyond our capabilities; implementing Viola-Jones in Verilog seems more like a graduate project than an undergraduate one. Given our team of smart, hard workers, we accept the challenge, but in reality we are not sure if this is possible. We have talked to many instructors on this subject, such as Ms. Pouye Sedighian, Mr. Joshua Hayter, Mr. Darin Goldstein, and Mr. Todd Ebert.

Ms. Pouye Sedighian has done work at UCLA with face recognition algorithms using MATLAB and OpenCV. She guided our group toward looking for a face detection algorithm rather than face recognition.

Mr. Joshua Hayter has worked with FPGAs in image processing and object detection.

Mr. Darin Goldstein has told us that face detection is not computationally efficient to run on a 1 GHz processor or less. He claims that we will need a faster processor, and that if we do use the Raspberry Pi we might only be able to process 1 or 2 frames per second on its 900 MHz processor. Mr. Goldstein referred us to Mr. Todd Ebert, but due to the high volume of students at his office hours, it has been almost impossible to meet with him.

With an FPGA we have the ability to increase efficiency because we can design our hardware to run in parallel, rather than waiting on a processor to execute one instruction per clock. Mr. Hayter says he used the MicroBlaze soft-core processor in his object detection project.

Another concern is being able to balance all of the servos in a way that lets them move the Desk-Buddy and remain stable. We have been looking into ways to better balance the servos (using 3-D printed parts so that the overall weight the servos have to support is reduced significantly, adding springs to the arm that act as counterbalances, and using high-torque servos). We know that it is going to be hard to balance and move all of the servos together, but we will hopefully prevail in the end.

Appendix A:  

FPGA: Spartan 6 XC6SLX9-2TQG144C

SDRAM: 16Mbit x 16 MT48LC16M16A2P-75IT SDRAM SSOP54

ROM: 8Mb SPI Flash SST25VF080B-80-4I, 8 pins

LED: 1 RGB SMD5050 LED Light Strip (5 meter spool)

Transistors: 3 TIP120 Complementary power Darlington transistors

Speaker: Breadboard-Friendly PCB Mount Mini Speaker – 8 Ohm 0.2W

2.5V 1A voltage regulator: MCP1826S-2502E/DB-ND

100MHz 3.3V oscillator, HCMOS/TTL: CTX318LVCT-ND

Appendix B:  

Links to the full data sheets:

http://www.xilinx.com/support/documentation/data_sheets/ds162.pdf

http://arm9download.cncncn.com/datasheet/MT48LC16M16A2.pdf

http://ww1.microchip.com/downloads/en/DeviceDoc/25045A.pdf

http://e-radionica.com/productdata/RGB5050LED.pdf

https://learn.adafruit.com/rgb-led-strips/schematic

https://www.fairchildsemi.com/datasheets/TI/TIP122.pdf

https://www.adafruit.com/datasheets/P1898.pdf

http://www.farnell.com/datasheets/37964.pdf
