My name is Garrett Souza. I'm a Course 6-3 computer science major. I'm doing a project analyzing the effect of visual media on implicit biases. I'm involved in the fashion magazine on campus. I'm a very visual learner, and I've been more and more interested in how people are actually perceiving the images I take and also in how I'm consuming images. Nowadays we have Instagram and Facebook, and you're seeing thousands of photos a week, and I was wondering, "How is that affecting me implicitly?" If I've looked at twenty images portraying someone who looks like them in a certain way, has that impacted how I then interact with that person in real life? My guess is "yes," but that's what I really wanted to study.

Affective computing is really geared toward how people can naturally engage with computers: any device that can measure some sort of affect or emotional response. So we use cameras that can detect micro-expressions in the face that signal happiness, sadness, or fear. There are also wrist sensors that detect electrodermal activity, which signals stress and anxiety. And then there's eye tracking: where are you looking on a screen? What parts of an image are specifically piquing your interest?

Take elections, for example. When a news outlet is reporting on a specific political candidate, what images are they choosing to broadcast that candidate's position, and how are those images biased by the outlet's own preferences? And then how is that impacting the millions of people who are watching?

The ideal outcome of this project is a quantitative analysis that says, "This is the media you consume on a day-to-day basis, and this is how it is actually impacting you." I think that would lead to a lot of people being more cognizant of what they're consuming, and also of what people are producing.

My hope is that this computational thinking is used while keeping in mind the implications of the technologies being created and the biases within the approaches. Face-tracking software is trained on faces; that's how we generate the neural networks. And if those faces are all white or Caucasian, it's so much worse at detecting the faces of darker-skinned people. How is that impacting the neural networks of the face-tracking software that the government is using in web cameras across our nation? Are there biases present in that?

It's a really satisfying feeling when you talk to someone and you fundamentally believe that they understand you, not that they agree with you or anything, just that they understand you. And that your words aren't being misconstrued, because even language is so hard. It's so hard to actually say what you're trying to say. "An image is worth a thousand words," or whatever. It's such a cliché statement, but I think it has some merit, in that visuals make it so much easier to convey a broad, robust array of things that people can interpret. It's really hard to go through life if you feel like you're not really being seen. And I think that's where the desire for creativity comes in. At least for me.