Bridging the Cyber Physical World with Robotics and Smart Sensors - Geekcamp 2020

Published on: Tuesday, 29 September 2020

Skip to talk at 1:26 • Q&A in the description

GovTech’s Digital Operations Smart Services (DOSS) platform uses deep learning to develop smart sensors and autonomous robotics. The robot dog SPOT and smart thermal scanner SPOTON were both developed on the DOSS platform. Jia Yi will share his experiences developing DOSS and plans for the platform.

Chong Jia Yi is a Distinguished Engineer at GovTech with deep technical expertise in simulation and animation.

Slides at: https://drive.google.com/file/d/1oDpKL8FFEns2GVTttKJgYAbiAJKw4uXo/view?usp=sharing

-
Q: Hi Jia Yi, you mentioned computing on the edge. What payload do you use on Spot to do the edge computing?
A: Our code is cross-platform, but currently we run it on either the NVIDIA Xavier or a regular embedded i5 device.

Q: Do you have an example of an embedded i5 device?
A: Just a regular NUC is good enough.

Q: A NUC with a GPU? Is it possible to share more about the hardware specs?
A: The NUC does not have a GPU. You can use something like an Intel Neural Compute Stick to accelerate DNN inference.
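As an illustration, here is a minimal sketch of offloading DNN inference from a GPU-less NUC to a Neural Compute Stick using OpenVINO's Python API. The model file names and the dummy input are placeholders, not part of the DOSS stack:

    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()
    # Load a model already converted to OpenVINO IR format (placeholder file names).
    net = ie.read_network(model="detector.xml", weights="detector.bin")
    # "MYRIAD" targets a Neural Compute Stick; fall back to the CPU if none is plugged in.
    device = "MYRIAD" if "MYRIAD" in ie.available_devices else "CPU"
    exec_net = ie.load_network(network=net, device_name=device)

    input_name = next(iter(net.input_info))
    # Dummy frame standing in for a camera image in (N, C, H, W) layout.
    frame = np.zeros(net.input_info[input_name].input_data.shape, dtype=np.float32)
    result = exec_net.infer(inputs={input_name: frame})
    print(device, {name: out.shape for name, out in result.items()})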

Q: is the code open source on github?
A: not yet, but it may be in the future

Q: Vision-only has many catastrophic failure modes. Why take this path?
A: We don't run it on pure vision alone. In the first video we use traditional LIDAR/SLAM approaches for autonomy. But vision-based autonomy is the general direction the industry is moving towards; Comma.ai, Tesla, etc. are all purely vision based. Also, LIDAR is expensive and does not scale.

Q: Hi Jia Yi, what kind of tests does your team conduct before bringing the robot to the physical environment?
A: We do extensive testing within a controlled environment in our office. We run our stack end to end in our custom framework, which lets us stress test every part of the code. Even when we bring it to a physical environment, the first thing we do is turn on a "physical simulation" switch that shows us the robot's intentions without any actual motor actuation. This lets us determine whether the robot has correctly learned what to do before any real motors are activated.
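To make the idea concrete, here is a hypothetical sketch (not GovTech's actual framework) of such a "physical simulation" switch: the stack keeps issuing motor commands as usual, but while the switch is on, the commands are logged as intentions instead of being sent to the motors:

    from dataclasses import dataclass

    @dataclass
    class MotorCommand:
        linear: float   # forward velocity, m/s
        angular: float  # yaw rate, rad/s

    class MotorInterface:
        def __init__(self, physical_simulation: bool = True):
            self.physical_simulation = physical_simulation

        def actuate(self, cmd: MotorCommand) -> None:
            if self.physical_simulation:
                # Dry run: surface the robot's intent without moving any motors.
                print(f"[SIM] would drive linear={cmd.linear:.2f} m/s, angular={cmd.angular:.2f} rad/s")
                return
            self._send_to_hardware(cmd)

        def _send_to_hardware(self, cmd: MotorCommand) -> None:
            # Real actuation would go through the robot's motor drivers here.
            raise NotImplementedError("hardware backend not shown in this sketch")

    # The rest of the stack calls actuate() unchanged; only the switch differs
    # between office testing and the first run at a new physical site.
    motors = MotorInterface(physical_simulation=True)
    motors.actuate(MotorCommand(linear=0.5, angular=0.1))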

-
Visit https://geekcamp.sg for more information about GeekcampSG
