Eileen Guo: The most useful datasets to train algorithms are the most realistic, meaning that they're sourced from real environments. But to make all of that data useful for machine learning, you actually need a person to go through and look at whatever it is, or listen to whatever it is, and categorize and label and otherwise just add context to each bit of data. You know, for self driving cars, it's, it's an image of a street and saying, this is a stoplight that is turning yellow, this is a stoplight that is green.

Jennifer: But there's more than one way to label data.

Eileen Guo: If iRobot chose to, they could have gone with other models in which the data would have been safer. They could have gone with outsourcing companies that may be outsourced, but people are still working out of an office instead of on their own computers. And so their work process would be a little bit more controlled. Or they could have actually done the data annotation in house. But for whatever reason, iRobot chose not to go either of those routes.

Jennifer: When Tech Review got in contact with the company—which makes the Roomba—they confirmed the 15 images we've been talking about did come from their devices, but from pre-production devices. Meaning these machines weren't released to consumers.

Eileen Guo: They said that they started an investigation into how these images leaked. They terminated their contract with Scale AI, and also said that they were going to take measures to prevent anything like this from happening in the future. But they really wouldn't tell us what that meant.

Jennifer: These days, the most advanced robot vacuums can efficiently move around the room while also making maps of areas being cleaned. Plus, they recognize certain objects on the floor and avoid them. It's why these machines no longer drive through certain kinds of messes… like dog poop for example. But what's different about these leaked training images is the camera isn't pointed at the floor…

Eileen Guo: Why do these cameras point diagonally upwards? Why do they know what's on the walls or the ceilings? How does that help them navigate around the pet waste, or the phone cords or the stray sock or whatever it is. And that has to do with some of the broader goals that iRobot and other robot vacuum companies have for the future, which is to be able to recognize what room it's in, based on what you have in the home. And all of that is ultimately going to serve the broader goals of these companies, which is to create more robots for the home, and all of this data is going to ultimately help them reach those goals.
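To make the annotation idea described above concrete, here is a minimal sketch of what one human-labeled training example might look like in code. This is purely illustrative: the schema, field names, file path, and label strings are all hypothetical, not iRobot's or Scale AI's actual annotation format.

```python
from dataclasses import dataclass, field

@dataclass
class BoundingBox:
    """Pixel coordinates of one labeled object in an image.

    The label strings below are made-up examples in the spirit of the
    transcript ("stray sock", "phone cord"), not a real taxonomy.
    """
    x: int
    y: int
    width: int
    height: int
    label: str

@dataclass
class AnnotatedImage:
    """One raw image plus the human-added context that makes it trainable."""
    image_path: str
    boxes: list = field(default_factory=list)

    def add_label(self, x: int, y: int, width: int, height: int, label: str) -> None:
        # An annotator draws a box around an object and names it.
        self.boxes.append(BoundingBox(x, y, width, height, label))

# Usage: an annotator views the image and marks what they see.
# The path is hypothetical.
example = AnnotatedImage("frames/living_room_0042.jpg")
example.add_label(120, 300, 80, 60, "stray_sock")
example.add_label(400, 250, 150, 90, "phone_cord")

print([box.label for box in example.boxes])  # ['stray_sock', 'phone_cord']
```

The point of the sketch is the division of labor: the image itself carries no meaning for a learning algorithm until a person attaches structured labels like these, which is exactly the step labeling firms such as Scale AI are contracted to perform.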