Computer Vision

AI Model Deployments For Edge AI with the Chooch AI Platform

In this 15-minute presentation, Emrah Gultekin, CEO of Chooch AI, presents how the Chooch AI platform ingests visual data, trains the AI, and exports AI models to the edge. This enables scalable inferencing at the edge across any number of devices and any number of cameras. A transcript of the presentation is provided below the video.

Hi, everybody. So today we’re going to be talking a little bit about Edge AI and how that is performed, so let me go ahead and share my screen. So mass deployment of AI models on the Edge, that’s what this is about today.

Basically, what we do here at Chooch is there are three components that make up the entire system, and one is the dashboard, and that’s the cloud account that you have. And that’s really crucial because that’s where you create the account, that’s where you select your pre-trained models, you can actually train new models on the account, you can add devices, and so forth. So that’s one part of it.

The next part is the device itself on the Edge, which is usually an NVIDIA device. And then the third component is the camera, so any type of imaging that’s coming in. So the camera’s associated with the device and that’s where the inferences are done, and you’re able to manage all that on premise and on the Cloud.

So here’s an example of what the output looks like on any of these devices and cameras that you have, so safety vests, hard hats, and whatever you basically train it for. And so these are outputs that are saved on the device, and you can create alerts, or you can create SMS messages or email messages, depending on your use case. And you could aggregate all this information and generate the reports as well.
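
As a rough illustration of that kind of alerting (a minimal sketch, not Chooch's actual code; the detection fields, addresses, and mail relay are all assumptions), detections saved on the device could be filtered and turned into an email like this:

import smtplib
from email.message import EmailMessage

# Hypothetical detections as saved on the device (field names are illustrative, not the real schema)
detections = [
    {"class_title": "person", "hard_hat": False, "camera_id": "cam-01"},
    {"class_title": "person", "hard_hat": True, "camera_id": "cam-02"},
]

# Collect anyone detected without a hard hat
violations = [d for d in detections if d["class_title"] == "person" and not d["hard_hat"]]

if violations:
    msg = EmailMessage()
    msg["Subject"] = f"Safety alert: {len(violations)} person(s) without a hard hat"
    msg["From"] = "[email protected]"        # placeholder sender
    msg["To"] = "[email protected]"     # placeholder recipient
    msg.set_content("\n".join(f"Camera {v['camera_id']}: missing hard hat" for v in violations))
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)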

AI Model Deployments for Edge

So if we look at AI as a whole, in terms of the different areas and the different types of things that you need to do to make it work properly, we're looking at three main areas. The first is dataset generation, and that's the most crucial part in terms of starting out. The second part is training, over here; that's where you create the models. And then inferencing, and that's when you have new data coming in and it generates inferences, which are predictions of what it sees.

So this is like the cycle of it, and then the inferencing goes back into dataset generation as well. So if you have new types of information coming in, new types of data or video streams, it’s important to feed it back into dataset generation to refine the model and also update it, or basically, maybe train new classes or new models as well.

So the device is really crucial here because that's where it sits on the Edge: you have a device and a camera that looks at the area, and then it does the inferencing. So what's important here is to be able to put all these streams onto the devices. And the reason is, first, network load: nothing has to be sent to the Cloud, so there's essentially no network burden. The second issue is privacy, because everything stays on the device. And the third is speed, at around two milliseconds per inference, which is far faster than anything you're going to do on the Cloud.

We have many, many devices and many models, and you can manage these devices and the models from your dashboard.

And then the camera is associated with the device. So you create a device, and then you add cameras to it, and you can add multiple streams to any of these NVIDIA devices. So let's start with dataset generation and AI training. That's really, really crucial over here, just how we do it.

So on the dashboard, what you do first depends on what you're doing, so it's facial, image, or object. Object is the most complex, so we start with that. Here, you create a dataset. Let's do that. It'll ask you to upload images or videos to create your dataset, and it asks you whether you want to do bounding box or polygon annotation. Annotation is a way to label what's inside of that image or video. So we'll go into some examples of that as well.

Here's, let's say, a raw image of what you want to train on. And what you start doing is drawing bounding boxes. If it's polygon, then you would do segmentation. Here you would do it and name what you're looking at, so it could be "hard hat", it could've been "red hard hat", and then safety vest and so forth. So you basically do this manually. If it's an object the system already knows, it'll start giving you suggestions.

So you upload these, and you annotate them manually. If it's something new, you have to do it manually. If it's a known object that the system already knows, it provides you with suggestions. So that creates a dataset, and here you are: 141 images of a hard hat and 74 of a safety vest.
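
To make the annotation idea concrete, here is a hypothetical record for one labeled image; the field names and file name are illustrative, not the platform's actual schema:

# Illustrative bounding-box annotation for one labeled image (field names are assumptions)
annotation = {
    "image": "site_entrance_0001.jpg",
    "width": 1920,
    "height": 1080,
    "objects": [
        {"label": "hard hat",    "bbox": [412, 180, 523, 266]},   # [x_min, y_min, x_max, y_max]
        {"label": "safety vest", "bbox": [380, 300, 610, 720]},
    ],
}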

So these would be the raw images, so you would have raw annotations here. And then what happens in the back is this would be augmented by roughly 18x in order to enrich the dataset. So it changes it and augments it in the backend.
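
As a rough sketch of what that kind of backend augmentation involves (generic Pillow code, not the platform's actual pipeline), each raw image can be flipped, rotated, and brightness-shifted to multiply the dataset; in practice the bounding-box annotations have to be transformed along with the pixels:

from PIL import Image, ImageEnhance

def augment(path):
    """Generate several augmented variants of one raw training image."""
    img = Image.open(path)
    variants = [img, img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)]           # original + mirror
    variants += [img.rotate(angle, expand=True) for angle in (-10, 10)]        # small rotations
    variants += [ImageEnhance.Brightness(img).enhance(f) for f in (0.7, 1.3)]  # darker / brighter
    return variants

# Chaining flips, rotations, crops, and brightness shifts like these is how a
# dataset can be enriched to roughly 18x its original size.
variants = augment("site_entrance_0001.jpg")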

Here you then create a perception. So you go back and you say, "Hey, we have the dataset, now let's create the perception," which is the model. You name your perception, then you select the dataset; you can reuse these datasets, obviously, for different types of models that you're building. Then it starts training, and you can see the log of what's going on. And then it's actually trained.

And here, you can do a test via upload, test your new perception, your new model, and then basically provide feedback to the model. It'll also generate an F1 score with it. Here, you can see the JSON response, so this is the raw JSON with the class title and the coordinates of what it's looking at, what it predicts.
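
For illustration only, a detection response of that sort might look roughly like this; the exact keys are assumptions rather than the documented schema:

import json

# Hypothetical raw JSON response from a test upload (keys are illustrative)
raw = """
{
  "predictions": [
    {"class_title": "hard hat",    "score": 0.97,
     "coordinates": {"xmin": 412, "ymin": 180, "xmax": 523, "ymax": 266}},
    {"class_title": "safety vest", "score": 0.91,
     "coordinates": {"xmin": 380, "ymin": 300, "xmax": 610, "ymax": 720}}
  ]
}
"""

for p in json.loads(raw)["predictions"]:
    c = p["coordinates"]
    print(f"{p['class_title']}: {p['score']:.2f} at ({c['xmin']}, {c['ymin']})-({c['xmax']}, {c['ymax']})")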

And here's the F1 score. This is a measure of accuracy, the harmonic mean of precision and recall. So the model automatically generates the accuracy of those particular classes. But that's not enough, because what you need to do is go back and check it as a human. And this is done manually, usually pre-deployment, though sometimes it's done after deployment as well. What you want is an F1 score that is above 90%, and that's what this is about. You're able to download this and actually test many images with it.
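
For reference, F1 is not a plain accuracy percentage but the harmonic mean of precision and recall; here is a quick way to compute it from a manual review of the test images (the counts below are made-up examples):

def f1_score(true_positives, false_positives, false_negatives):
    """F1 is the harmonic mean of precision and recall."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Example: reviewing 100 hard-hat instances by hand
print(f1_score(true_positives=92, false_positives=5, false_negatives=8))  # ~0.93, just above the 90% target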

So device deployment and camera management, this is also crucial. So let’s say you’re using a pre-trained model or pre-trained perception, or you’ve kind of trained your own thing. You want to deploy these onto the Edge so that the inferencing is done on the edge.

And here you have the device that you want to create, so you go into devices and create the device, say an office device, a device for whatever office, and it'll create the device, right? It'll have a device ID on it. And then what you want to do is add cameras to it, right? So you have your Jetson line or your T4, and then you want to add a camera to it, or multiple cameras. You add the camera, name it, give it the RTSP feed as well, and then you select the perceptions that you want to put onto this device.

So let's say it's the hard hat one, or whatever, fall detection. You add these to the device, and you can see here it's added to the device, and boom, it starts working. So what we're really doing here is training on the Cloud and then deploying the model onto the Edge; whether it's pre-trained or something you've trained yourself, it doesn't really matter. You're able to push it out onto the device.
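
To give a sense of what an RTSP camera stream feeding an edge model looks like at the device level, here is a generic OpenCV sketch; it is illustrative only, not how the Chooch edge container is implemented, and the URL and run_inference function are assumptions:

import cv2  # OpenCV

RTSP_URL = "rtsp://192.168.1.50:554/stream1"   # placeholder camera feed

def run_inference(frame):
    """Placeholder for the perception model running on the edge device."""
    return []  # would return detections such as hard hats or falls

cap = cv2.VideoCapture(RTSP_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break                              # stream dropped; a real deployment would reconnect
    detections = run_inference(frame)      # inference stays on the device, nothing is streamed out
cap.release()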

And the device actually syncs up with your Cloud account anytime it has connectivity. You can use it without connectivity, obviously, but when it does have connectivity it'll sync automatically with the Cloud account if you want it to, whether you've trained or retrained a model, done something new, or just have basic system updates.

So this is an example of masks and no masks.

This is an example of social distancing.

So you could put all these onto the Edge so that they work entirely on the device. Basically, what happens with the Edge is you don't stream anything to the Cloud, and in doing so it works 24/7 without any type of burden on the network. Doing all of that on the Cloud is also very, very expensive.

This is people counting. This is fall detection. These are examples of anything that you want to train, or you want to use anything that’s pre-trained, you’re able to do that. Fire detection.

And you’re able to select the pre-trained and deploy them immediately, if you want to use any of the pre-trained stuff. But you might have a use case where you want something specific and you would work with us so that that becomes trained. And we do the training very, very quickly, depending on the data that our clients provide.

So here you have many, many devices, and you can manage all these devices and all the models on them remotely, and it does the inferencing on those devices.

So, thank you. If you have any questions about Edge AI, please reach out to us at [email protected], and we look forward to continuing to work with the ecosystem. Thank you.

So what’s crucial here is to be able to deploy these models and perceptions on the Edge, on multiple devices and through multiple cameras. And that’s what this is really about. And to be able to manage those at scale.

So to be able to do that, we built the system, which is the dashboard where you have your models and your devices, and where you manage those devices and cameras. And then physically, you need to have these cameras hooked up to the devices, whether any of the Jetsons or on-prem hardware such as the T4s, and to be able to manage these and update these all at once: to train something new, deploy it on multiple devices, scale it, and also retrain it and keep them synced.

So AI is not about static models. It's about dynamic models, and also dynamic situations where you have these different devices out there with different types of camera angles, different types of cameras, and so forth. So you need to be able to do this at scale and manage all of it, and that's what we've done as a company: we provide you with the Chooch AI platform in order to deploy these very, very quickly. You're able to download the Docker container, set it up in two and a half to three minutes, and then basically scale it out depending on what your use case might be.

So it’s really important that we recognize what this is all about. This is all about efficiency, and to be able to do these at scale, and to be able to do it very quickly. And that’s what we’ve done as a company.

Thank you for listening to this. This is all about the Edge: being able to do the inferencing on the Edge, deploy these models and these devices on the Edge, and provide that type of inferencing and that type of data. And we look forward to continuing to work with the ecosystem here.
