
The Edge AI Presentation on AI Day

This presentation by Michael Liou was part of the Open Systems Media AI Day, Deploying Vision Systems with AI Capabilities. With the proliferation of edge devices, computer vision has become one of AI's killer apps in the form of edge AI.

Here’s the transcript. And go here for the entire AI Day Presentation.

I want to kick off our presentation by talking a little bit about who we are at Chooch AI, then talk about how Chooch AI thinks about Edge AI and Edge deployments, and then walk through a couple of examples and some closing thoughts.

So for those of you who don't know who we are, we're a Silicon Valley-based computer vision company, and we've developed a very horizontal platform that detects not only objects in images but also actions such as coughing, sneezing, walking, and running. We can process traditional visible-spectrum imagery and video, but we also handle thermal, infrared, CT scan, X-ray, and multispectral satellite imagery, any type of imagery where human subject-matter expertise is used to analyze a particular image. We generate predictions both in the cloud and on the Edge, and we'll be focusing most of our discussion on the Edge today. And as I mentioned, we're a horizontal, ready-now platform.

And our secret sauce at Chooch AI is the ability to generate models very, very quickly. The development cycle for AI models typically runs anywhere from 9 to 12 months. The bottlenecks occur at dataset generation, which is a very manual and tedious process, and at the model training itself. Our platform has been able to generate models in as little as 24 hours, depending upon the dataset and the diversity of training data that clients provide for us.

As you can see, we've developed solutions across multiple verticals. They range from manufacturing to retail, healthcare, and oil and gas, and we have Fortune 500 companies as well as the US government as clients. And lastly, we are deployable on both Intel CPU and Nvidia GPU architectures. We have a couple of other processors on our roadmap, but given the large TAM that Intel and Nvidia provide, essentially, we're hardware agnostic.

So these days, everyone's talking about digital transformation, and Edge AI, I think, plays a very, very important role. If you think about all the data being created these days, 90% of all data out there has been created literally in the last two years, and almost 80 to 90% of it is unstructured. Our belief is that computer vision, especially on the Edge, can help transform what we call dumb cameras into sensors, removing the need for RFID, motion sensors, or thermal sensors; you can simply take a basic 1080p camera and transform it into an Edge AI device. And this obviously can provide a lot of structured insights from the data.

Now, Edge AI deployments, we feel, can perform a number of different functions, and they're especially important for what we call mission-critical applications. What would be a definition of a mission-critical application? It would be things like wildfire detection, or a hospital surgery. You don't want to see "Internet unstable" in the middle of a procedure. Things like airport security, where you're looking for bad guys or doing weapons detection. These are what I call mission-critical applications.

Another reason why you want Edge AI is data gravity, and we define data gravity in two ways. One is the sheer amount of data being generated on the Edge right now. It's impractical to stream all of that to the cloud: it's expensive, and you may have bandwidth issues, right?

The other issue is that some of this data is very, very heavy, right? Some of these files and videos are gigabytes and gigabytes in size. So why not bring the processing to where the data lives, as opposed to streaming it to the cloud, doing the analysis, and bringing it back?

It goes without saying that privacy and protection of data these days is paramount, right? Regulations across the US, as well as around the world, mandate that people have control of their data. So if you are ingesting your video on the Edge and you're analyzing the video on the Edge, on your own hardware, on-prem, then it's up to you what you do with it. You could pixelate faces. You could throw away the data. You can keep just the metadata. You can put it back into your private databases. What's critical is that people have control.

This applies not only to Edge inferencing and predictions on the Edge but also to the other two of the three pillars that constitute computer vision: dataset generation and model training, right? There are a number of enterprises out there in possession of very, very sensitive data. They don't want it in a third-party cloud, and they want to be able to train proprietary models on-prem, right, and this is a capability that we're starting to roll out and deliver to our clients starting next month.

In addition, the model training is also quite complex. So if you can deliver not only the predictions but also dataset generation and model training on-prem, you now have a completely closed, air-gapped system on premises, without having to worry about some of the issues that I've mentioned before.

So the cloud, as I mentioned, has its conveniences but obviously has a fair amount of drawbacks. We touched upon network connectivity; you've got latency and bandwidth, and, of course, you have costs. But imagine the application when it comes to remote monitoring, right? There are applications that we've developed to help monitor things remotely where communications to the cloud or to the internet are compromised or very, very weak due to a lack of infrastructure. Use cases would be wildfire detection in remote areas, or monitoring utility equipment, whether it be electrical or water infrastructure, where there's probably not even 2G or 3G in some of the more remote areas. Or you're doing high-speed manufacturing, where you can't afford to have the data stream through your internet connection up through the cloud stack and back down just to render a prediction. You need that on-prem compute.

The other thing that you need to think about when you're exporting models out to the Edge is that you want to be very thoughtful about the architecture, and here at Chooch AI, Edge implementation is not an afterthought. It's actually one of the three key pillars of our overall platform. So not only do your models need to be robust and accurate and fast and containerized, they also need to be lightweight, right? Typically, when models are developed, they get quantized and optimized for the Edge, but you usually lose a little bit of speed and a little bit of accuracy. So this is pretty important in ensuring that the models are robust and work not only in the cloud but also on the Edge, where they need to be lightweight. It's really not that economical if only one camera can run one model, right, and that one stream hooks up to one CPU or GPU to generate an inference.
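As a general illustration of that quantize-and-optimize step, here is a minimal Python sketch. This is not Chooch's pipeline; the model choice, file names, and settings are placeholder assumptions. It exports a trained vision model to ONNX so an edge runtime such as TensorRT (Nvidia) or OpenVINO (Intel) can optimize it for the target hardware, and optionally applies int8 quantization, which is exactly the small speed/accuracy trade-off mentioned above.

```python
# A minimal sketch, not Chooch's pipeline: shrink a trained vision model for the Edge.
import torch
import torchvision

# Any trained detector or classifier would go here; a pretrained MobileNet stands in.
model = torchvision.models.mobilenet_v3_small(weights="DEFAULT").eval()

# Export to ONNX so an edge runtime (e.g., TensorRT on Nvidia GPUs or
# OpenVINO on Intel CPUs) can optimize it further for the target hardware.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "model_edge.onnx", opset_version=17)

# Optional dynamic int8 quantization: a smaller, faster model in exchange
# for a little accuracy, the trade-off described above.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
torch.save(quantized.state_dict(), "model_edge_int8.pt")
```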

So we've developed a general inference engine at Chooch AI that takes in the other models we deploy, and we've developed the technology to deploy those models and allocate them by stream and by GPU and CPU dynamically for our clients. This results in the ability to manage thousands of cameras and hundreds of GPUs and CPUs, to manage and deploy dozens of containerized models, and then to handle all of that data and those predictions. So if you think about all these different factors, it's a lot more complicated than just putting a model on a camera and saying that we detected that UPS dropped off a package at your front door. And as I touched upon earlier, we are now delivering fully air-gapped and orchestrated deployments that take all of this into account.
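To make the allocation idea concrete, here is a hypothetical sketch, in no way Chooch's actual orchestration API, of the simplest version of that scheduling problem: many camera streams spread across a small pool of GPU and CPU workers, with each new stream going to the least-loaded worker. Device names and stream URLs are illustrative.

```python
# Hypothetical sketch of least-loaded stream-to-device allocation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Worker:
    device: str                           # e.g. "cuda:0", "cuda:1", "cpu"
    streams: List[str] = field(default_factory=list)

def assign_streams(stream_ids: List[str], workers: List[Worker]) -> None:
    """Assign each new stream to the worker currently serving the fewest streams."""
    for stream_id in stream_ids:
        target = min(workers, key=lambda w: len(w.streams))
        target.streams.append(stream_id)

if __name__ == "__main__":
    pool = [Worker("cuda:0"), Worker("cuda:1"), Worker("cpu")]
    cameras = [f"rtsp://camera-{i}" for i in range(10)]
    assign_streams(cameras, pool)
    for worker in pool:
        print(worker.device, "->", len(worker.streams), "streams")
```

A real orchestrator would also weigh model size, GPU memory, and per-stream frame rate, but the least-loaded rule captures the basic dynamic-allocation idea described above.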

So with that, I want to walk through a couple of solutions that we have delivered, all of which required Edge deployments, right? One is in healthcare. We have developed the technology to power what we call a smart OR, and our technology is helping drive a suite of tools in the OR that relieves the IT burden in the room. The actions that can be detected in the OR range from the more mundane, such as when the patient is wheeled in or when the surgery team walks in, to more complex ones, like when anesthesia starts or stops, and then, of course, tracking the various instruments and objects entering and exiting the patient's body cavity.

This technology can not only relieve the IT burden of keeping track of everything but also help reduce the number of retained objects during procedures, which naturally improves patient outcomes, reduces error rates, and lowers readmissions. And by the way, for those who don't know, the cost of readmissions and those follow-up procedures is eaten by the hospital and not covered by insurance. And of course, it reduces costs overall. So this is one application we developed for one Fortune 500 company, and we currently have, I think, five ORs up and running right now.

The next example is high-speed manufacturing. We recently developed and finished a project with a major global bottler where we're ensuring that bottle caps are installed securely and properly onto the product on high-speed lines. This particular bottler had a product with a faulty bottle cap reach the retail market, which resulted in moldy product. As a result, people tweeted about it, it hit nationwide sales, and the company itself suffered reputational damage. And lastly, they had to eat a very expensive recall in the process as well.

So in manufacturing, we're seeing quite a bit of interest in detecting defects, anything ranging from imperfect goods to scratches. We've also seen use cases in arc welding in factories. There are a lot of places where AI on high-speed assembly lines could definitely take advantage of Edge computing.

The last example, where we currently have a lot of interest, is industrial worker safety. If you think about it, most companies have a warehouse or a logistics center, right? And as you drive around cities and urban areas, there's plenty of mall, commercial, and residential construction going on, right? What we've developed is a suite of tools to ensure that workers are wearing their hardhats, safety glasses, safety vests, and gloves, and this helps improve worker visibility and safety and prevents a number of injuries. I believe 2.5 million people hit the ER every single year from workplace injuries alone, right? We've also developed technology to help keep people out of particular areas, or to count the people performing certain types of tasks, ensuring that at least two or three people perform a task rather than one lone cowboy. This technology is designed to reduce both injury frequency and injury severity.
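The compliance logic behind a PPE check like this can be sketched quite simply. The snippet below is illustrative only: it assumes a detector that returns (x1, y1, x2, y2) boxes for people and for hardhats, and uses an arbitrary overlap threshold; it is not Chooch's actual safety model.

```python
# Illustrative only: flag people with no hardhat detected on them.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def contained_fraction(inner: Box, outer: Box) -> float:
    """Fraction of the inner box's area that lies inside the outer box."""
    x1, y1 = max(inner[0], outer[0]), max(inner[1], outer[1])
    x2, y2 = min(inner[2], outer[2]), min(inner[3], outer[3])
    intersection = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    inner_area = (inner[2] - inner[0]) * (inner[3] - inner[1])
    return intersection / inner_area if inner_area > 0 else 0.0

def flag_missing_hardhats(people: List[Box], hardhats: List[Box],
                          min_overlap: float = 0.5) -> List[int]:
    """Indices of person boxes that have no hardhat box mostly inside them."""
    return [i for i, person in enumerate(people)
            if not any(contained_fraction(hat, person) >= min_overlap
                       for hat in hardhats)]
```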

It also helps in reducing fraud and bias, and ultimately, when your workers' comp insurance is up for renewal, it can potentially lower your overall insurance costs. As a matter of fact, Edge AI computing can help catalog and forensically build a whole database of all the various incidents out there, and therefore let you present that data to insurers to potentially be granted an insurance discount.

One area where we've seen a tremendous pickup of interest lately is how telco 5G can potentially impact computer vision and Edge computing. We've had multiple discussions with a lot of the players within the telco space, as well as the people offering orchestration fabric out there. And they have a strong interest in seeing how 5G, as well as cable, by the way, can potentially be utilized for computer vision.

Think about a smart city, or think about an intersection that has traffic lights. It would be impractical, right, to put a small CPU or GPU device on every single traffic light. It would also be impractical to drop a server on every single block. Well, this is where 5G might come into play. We can stream that video at very, very high speed with very, very good bandwidth to a near-edge data center or server that has all the models and all of the applications that you need to run.

So we're kind of redefining now what the Edge is. We're actually bringing the cloud to the Edge, with the power of the Edge and without the disadvantages of the cloud. And what this enables people to think about is, "Well, if I want to run 5G in my store, if I'm a mom-and-pop shop and I want to count people, see the demographics, and look at stockouts, for instance, I don't have to procure my own hardware anymore. I literally could just stand up a few cameras in the areas of interest, stream that video to a near-edge data center using 5G, run inference with all the models, do all the calculations that I need, and then get the analytics back almost instantaneously."
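As a sketch of that store-side flow, assume frames come from an ordinary IP camera and the near-edge data center exposes an HTTP inference endpoint; the URL and response fields below are hypothetical, not an actual Chooch or telco API. The client side can then be as small as this:

```python
# Hypothetical store-side client: grab a frame and send it to a near-edge server.
import cv2        # pip install opencv-python
import requests   # pip install requests

EDGE_ENDPOINT = "https://near-edge.example.net/v1/infer"  # hypothetical URL

def send_frame_for_inference(rtsp_url: str) -> dict:
    """Capture one frame from the camera and post it for inference."""
    capture = cv2.VideoCapture(rtsp_url)
    ok, frame = capture.read()
    capture.release()
    if not ok:
        raise RuntimeError("could not read a frame from the camera")
    ok, jpeg = cv2.imencode(".jpg", frame)
    response = requests.post(
        EDGE_ENDPOINT,
        files={"image": ("frame.jpg", jpeg.tobytes(), "image/jpeg")},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()   # e.g. people counts, demographics, stockout flags
```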

And this is what some of the players in this space are looking at. It can range from automatic payments at gas stations, to smart city deployments, to looking for traffic, fires, smoke, any type of condition that might cause public harm. So I think we're at the fairly early stages of this, but we're working closely with a number of different telcos, as well as other players out there, to establish this type of framework and find out which types of applications would be of most interest to start rolling out, and to whom.

I'll close by saying that as you think about Edge AI, you want to be mindful of a few things. One: is the technology ready, and ready now? I think the landscape has been littered with a lot of false promises and hopes over the years. So as you start vetting AI vendors, you want to make sure that the technology works. I know it sounds obvious, but there's a lot of great marketing material out there. You also want to consider whether you want a horizontal platform that can address multiple needs, not only in the front end of the store but in the back end of the store, and also other issues such as safety and security, or whether you want a monolithic model, which may do a fine job, but then when you start expanding your needs across the corporation and the enterprise for digital transformation, do you have to hire another ISV, and another ISV, and then think about integrations, right? If you integrate across one platform, you're going to save yourself a lot of time versus integrating with 5, 8, 10, 12 different players out there.

As I mentioned earlier, we run on both Intel and Nvidia. You want to think about hardware dependencies as well. Scoping this stuff out is not easy, because what ends up happening is you start thinking about the art of the possible, right? So we tell people to just focus on your key use cases. Think about the ROI those key use cases will be able to deliver, and then once you get that up and running, think about the other use cases that might help your overall enterprise. And it goes without saying that we can't do this all ourselves, right? I mean, we are a software company. We're not in the hardware business, but we are happy to partner with a lot of folks: the OEMs, the camera manufacturers, the chip manufacturers, and the global system integrators. We think this is the best approach to deliver best-of-breed enterprise solutions to our clients.

So with that, I want to conclude by saying thank you so much. I hope you learned something.

For more information, visit Chooch Edge AI or learn about Edge Devices.


Learn more about AI Vision.

Reach out to our team today about the benefits of AI Vision from Chooch.
