IT Chronicles Podcast: How Visual AI Adds Data and Real Insights To Any Image

Chooch AI is a visual AI platform that can process real-time images and extract relevant metadata. Emrah Gultekin, CEO and founder of Chooch AI, explains how their solution solves a fundamental problem in visual AI: the ability to acquire visual expertise in a structured way, similar to human knowledge. Originally published by IT Chronicles.

Carlos Casanova: Hello and welcome to IT Chronicles 10inTech. I’m here with my co-host, Kathleen Wilson. Hey Kathleen.

Kathleen Wilson: Hello.

Carlos Casanova: And Shane Carlson. Hey Shane.

Shane Carlson: Hey everybody.

Carlos Casanova: And our guest is Emrah Gultekin from Chooch AI. Welcome Emrah.

Emrah Gultekin: Hey everybody.

Carlos Casanova: Hey Emrah, so glad you could make it to the call today, the conversation. It sounds like, you know, we were chatting earlier, and it sounds like your company’s doing some really cool things in computer vision. Why don’t you tell us a little bit about it, for those that aren’t really familiar with, you know, what exactly computer vision is, and then we’ll get into it in a little more detail?

Emrah Gultekin: Yeah, sure. Great to be on. First of all, computer vision is a part of AI, it’s artificial intelligence, and you know we’ve been working on computer vision for decades now in different forms. What we do as a company is we copy human visual intelligence into machines. So the way that you see things and how you tag them, we copy that into a machine, so that we’re basically proliferating that person. If you’re a biomedical expert and you’re counting cells and identifying cells in microscopy, we take that capability that you have, put it into a machine and proliferate it. Creating those types of efficiencies is basically what we do with computer vision.

Shane Carlson: Excellent. So, as I was perusing some of the information you guys have out there about your various products and offerings, a couple things caught my eye. First and foremost is the concept of doing some of this visioning out on the edge, on IoT devices and things of that nature, and then a lot of different industry focuses. Talk to me a little bit about what has changed in the world over the last five or ten years to allow you guys to start doing this complex AI video imaging analysis out on devices that are, you know, out there in the field, maybe on a factory floor, maybe in a security line, things like that. I mean, what’s happening that allows you guys to be able to do that type of processing and calculation on those devices, where that was a huge challenge not too long ago?

Emrah Gultekin: Great question, Shane. So, you know, AI has been around in terms of theory for a long time, 40-50 years in the making. We knew how to do these types of regressions back then mathematically, but actually putting them into real life, putting them into machines and having it work at scale in real time has been a real challenge. Some of the things that have developed over the years have been the GPU servers, so being able to actually have these powerful machines on the cloud, and more recently on the edge, has been a real revolution. The second thing is the software part of it, which is relying on deep learning frameworks, like TensorFlow, MXNet, Gluon, you’ve got PyTorch, Keras, and these types of deep learning frameworks became available to the IT ecosystem. These are very recent developments, four-five years, and that has made it easier to deploy, but it’s not the end-all. I mean, there are still lots of components that we’re working on, and we think this is probably a 20 to 30 year development cycle that we’re in right now.

Carlos Casanova: So Emrah, you know, Shane rattled off quite a few different industries. Interestingly enough, many years back, I actually was a hardware design engineer, starting with an organization that did real-time live image processing, so it was the old-school hardware-designed convolutional filters, multi-order, you know, convolution stuff. So just hearing you talk about this now is partly bringing me back to good memories, and some very scary ones, at the same time. It sounds like obviously it’s advanced considerably, which one would expect; I mean, I was focused predominantly in a military space. Are you seeing any particular industry take this on more aggressively than others?

Emrah Gultekin: Yeah, great question. So, this is like the Internet in 1994-1995. I mean, everyone has to adopt it at some point, but who’s adopting first? We’re seeing a lot of traction in the healthcare industry, security and public safety, and also government, which is, you know, a big one with geospatial, satellite and drone imagery. These are the main ones that we’re seeing traction in, but we’re also seeing traction in retail, and that’s also interesting recently. A lot of this is, it’s basically a new platform for every vertical, actually, but some verticals are going to adopt it quicker than others.

Kathleen Wilson: So, I have a question. I have a few questions. But one of the things is, how quickly can some of these organizations realize the benefit from AI? Are we looking at like a six-month, you know, and I hate to use the word return on investment, but it’s more like value realization? So, you know, when adopting AI, how quickly have organizations you’ve worked with realized some benefit?

Emrah Gultekin: So, that’s been the challenge with these types of things, being able to develop them and deploy them in these enterprises. We’re talking about deployment periods of as quick as one week to deploy and use this type of AI, and in some more complex use cases, it could take three or six months to deploy it. So, our duty as practitioners is to make it easier and easier for these enterprises to be able to adopt it. And that’s really been our focus, but you’re right, Kathleen, it’s a difficult process, and what we’re seeing is, it’s not even part of their workflows. So, it’s not like they’re replacing something with something else; it’s that they don’t actually have anything in place right now, and you have to kind of disrupt that workflow, and that’s also a challenge. So, it’s taking some time, it’s not happening overnight, but nothing this grand and at this scale happens overnight anyway.

Kathleen Wilson: Well, I was surprised at how quick the implementation timeframes were, because, you know, looking back a few years ago, we’re talking six months was a minimum for most organizations to implement new technology. Sorry Shane, you had something?

Shane Carlson: No, I was just gonna ask, you know, what type of real-world problems are you guys helping businesses and consumers solve with this technology, and what are some good examples?

Carlos Casanova: Yeah Shane, I’m glad you asked this, that’s exactly what I was gonna say. I think for someone kind of new to a space it’s like, how exactly would I use this? You know, if I’m in retail, how exactly would I use it, in healthcare, whatever. So yeah, if you can give us some examples, I think that’ll be great.

Emrah Gultekin: Yeah, so something that everyone understands is facial authentication. We need to understand who that person is immediately and then customize what you’re doing based on that, so that’s a simple one to understand. But we’re doing lots of complex things as well, like understanding what’s happening in a surgical theater. So when an operation happens, you know all the steps, you know what’s happening, and you can send alerts if there’s an anomaly, or you can also share best practices with the data later. And we’re doing wildfire alerts from space, that’s very practical. We’re doing a lot of security, flare detection, leak detection, and things that, you know, humans usually do very well visually, we’re just training AI to do it so that humans don’t need to do it, and you scale it basically.

Shane Carlson: Yeah, you know, I’m thinking of the old Amazon Mechanical Turk that they had out a few years ago, I think they actually still have it, where they would pay people small incremental bits of money to do visual identifications off, say, satellite imagery or other things. I mean, I could go out there and put a Turk out there and have people work on it and do things, and you’d have a budget for how much you spend. These are things that we had humans doing because they were too complex at the time, or it wasn’t cost-effective to have computer models trained to do the recognition, but at the same time the humans were teaching those models and patterns to the AI that was behind the scenes. It seems like collectively, over the last five or ten years, we’ve been able to take those models and build on them and make them, you know, way better, way faster, way more effective than they’ve ever been. What do you see kind of as the future of where this is going, and is this gonna be kind of an everyday part of our lives, would you say, going forward?

Emrah Gultekin: Yeah, at the end, humans were the masters of the AI, so it’s important that we stay that way, and that we annotate and label images and label data properly. I think one of the problems that we’re facing a lot is bias in these data sets, so even if you have a Mechanical Turk doing something like that, you need to be careful, because it develops inherent implicit bias in these models. It is becoming part of our lives every day right now, and it’s going to become more and more a part of it as we move forward. You see that with, like, Google Assistant, or, you know, AIs having the ability to suggest now, so they’re suggesting certain types of sentences or certain types of tags. You’re seeing that more and more; we’re going to see that throughout our lives.

Kathleen Wilson: Well Emrah, thank you very much for joining us today. What I was really excited to hear about especially is AI in retail, because that’s one area where, you know, I think we could really benefit from having a little bit more AI in there. So, again, thank you very much for coming to share a little bit more about Chooch and the AI capabilities of your company, and we look forward to speaking to you again in the near future. Thank you.

Emrah Gultekin: Thank you very much.

Carlos Casanova: Thank you Emrah.

Shane Carlson: Thank you Emrah.


Learn more about AI Vision.

Reach out to our team today about the benefits of AI Vision from Chooch.

See how it works