
AI Journal Interview with Jon Hazzard: The Next Wave of AI: Fast Computer Vision Deployment

In Episode 8 of The AI Journal Podcast, hosts Kieran Gilmurray and Tom Allen welcome Jon Hazzard from Chooch AI. Computer vision is now a reality: you can use it in your business and personal life for an increasing range of applications. Chooch AI's platform enables your business and team to process video and images in as little as 20 milliseconds, with the added benefits of improved safety and quality.

Tom Allen:

Welcome back to another episode of The AI Journal Podcast. Today, we are joined by the amazing Jon Hazzard from Chooch AI. In this session, we’re going to be looking at computer vision and everything that happens within this amazing, emerging world. So to kick us off, I’d like to introduce you to Jon. Jon, could you please give us a bit of background on who you are, where you come from and what you’re doing with Chooch AI? We’re looking forward to having you on the show.

Jon Hazzard:

Yeah, absolutely. It’s great to meet you guys, and I appreciate you inviting me onto the show. My name is Jon Hazzard. I am the director of go-to-market strategy here at Chooch AI. By way of background, I joined Chooch AI roughly four months ago, coming over from Dell Technologies, and before that EMC, where I worked in our safety, security and computer vision group. My roles at Dell spanned everything from operations to finance, to sales, to sales strategy. And for the last three years of my career there, I was in our safety, security and computer vision group, working with companies like Chooch AI and helping them take their go-to-market and computer vision strategies into enterprise customers. So my role here at Chooch is very much the same. It’s really understanding what the marketplace around computer vision is, how we can bring computer vision and video analytics into enterprise customers, and really just helping build the company.

Tom Allen:

Yeah. I love as well that you had experience with Chooch beforehand. I’d love to know, Jon, what was it that drew you towards Chooch? Was it to do with their origin story and where they’re coming from? Because I remember speaking with Jeff and Emer, and it was just fantastic to hear what you’re doing at Chooch. I’d love to know how your story brought you into Chooch over the past four months. What’s the background there?

Jon Hazzard:

Yeah. Yeah. Absolutely. So while I was at Dell Technologies, I was working very closely with NVIDIA, really trying to align the Dell and NVIDIA strategy to find the best computer vision ISVs, as we called them, out there to take to the Dell sellers. We were really trying to find companies that made it easy to adopt computer vision inside of enterprise customers, so that our sellers had a clear message to bring to them. Through that process, I probably met somewhere between 25 and 30 different computer vision companies that were at some stage or another of their business strategy, anywhere from a one-person startup to a company that’s been in business for 15 or 20 years, playing in AI and IoT and computer vision altogether. What interested me about Chooch was that they saw the problem I ran into a lot in the past, and that was the speed at which they could deploy and create new models.

Jon Hazzard:

So in the computer vision world, because it is so new, because it is so bleeding edge, many times things are going to a pilot or a phase one. And oftentimes those development cycles are extremely long, somewhere between nine and 12 months to develop new models. Chooch was able to do very customized models in a short period of time, where we’re talking about days rather than the months it was taking everyone else. In that value proposition alone, I saw that there was a tremendous benefit. I had the ability and the opportunity to work with Chooch for probably about seven months or so before eventually joining the company back in June.

Kieran Gilmurray:

I love that method of try before you buy. I really like that. Some people won’t actually understand what computer vision is, Jon, but we know there are tremendous uses. I’ve seen tech that sorts the ripe from the non-ripe tomatoes on a tomato sorting line. Give us a little bit of an idea as to what areas you’re working in, what computer vision is, and what the business case for it is as well.

Jon Hazzard:

Yeah. Computer vision is a newer term, but the idea behind it isn’t necessarily new. It’s really taking video analytics to the next level. For years, there have been AI initiatives inside of companies, there have been IoT initiatives inside of companies. And many times those were related to a sensor, something that was a non-visual sensor. The reason behind that is that the sensor didn’t really transmit a lot of data, right? You would only need the metadata that was pretty specific to what that use case was, and then you’re able to drive insights out of that. That still pretty much dominates the AI marketplace. However, with some of the advancements in technology and the ability to extract even more data from a video sensor, the idea behind computer vision came to light.

Jon Hazzard:

We used to say at Dell, and we still say it here, that the camera was actually the first IoT sensor. It was a device at the edge that was gathering information and transmitting it back in order to generate insights. It’s not always viewed that way, though I think that’s starting to change. Because seeing is the same way that we learn, the camera as a device, as an IoT sensor, is probably the most impactful one that we’ll have inside of our businesses and inside of the marketplace as well. So computer vision really captures that video stream, that visual sensor, and brings that data back in order to gain some sort of insight out of it.

Jon Hazzard:

And that can be across a number of things, and across a number of different verticals. It can be use cases like you said, sorting tomatoes, or it can be basic things like safety and security. It’s really up to the customer, based on what they understand about their business, where they think they can drive additional insights, and then how a camera can be used inside of their standard operating procedure as a tool that helps them accomplish what they’re looking to do.

Kieran Gilmurray:

And what’s the best example you’ve seen in your business experience? One that really made you go, wow, that’s fantastic, I know I want to be in this industry.

Jon Hazzard:

Yeah. It’s interesting, because the applications are really endless. I mean, one of the cool ones that we’re seeing here is actually around cows. For some reason, we found this little niche marketplace around cows, basically looking at the human-to-cow interaction. And what I found is that being able to charge an additional 10 cents per gallon of milk, or per 100 gallons of milk, is extremely impactful to a company’s bottom line. And many of the ways that the milk, or any of the dairy products, can be compromised come down to that human-to-cow interaction. That’s a use case that we’ve actually seen from a number of different companies, and one where I was just like, wow, this is an industry that I did not see myself getting into. It really spoke to the wide-ranging application of computer vision and how it can mean so many different things for so many different customers.

Tom Allen:

I love it as well, because I remember saying to Jeff, it’s where I started off, and that was my first introduction to AI, I suppose, Jon. It was looking at machine vision, and we’d use it on big logistics automation lines. It was kind of funny, because we were having this conversation when I got introduced to Chooch and to what you were doing. I said, I had no idea all this was happening. You don’t picture it, whether it’s spotting gloves or, as you were saying, with cows: all these areas that machine vision can just spot, right. It can do such a better job than the human eye. There’s no error, there’s no faltering. And it’s one of those areas, when you look at AI or emerging technologies, that’s quite easy to miss, but you realize from these kinds of conversations how impactful it is and how big an impact it can have in the background. So I guess I’d love to ask you, where do you see this going? Where do you see computer vision being used in the next 5, 10, 15 years? Where do you see its role?

Jon Hazzard:

Yeah. I think we’re at the precipice of another industrial revolution. If you take a look at what happened at the turn of the 20th century, we started to use machines in different ways that really helped us push things forward. At that time, there was always this fear that these machines were going to put a lot of people out of work, but in actuality it just changed the way that work was being done. We were using machines to do menial tasks that people really didn’t want to do, that were unsafe or inefficient. And I see computer vision functioning in the same type of revolution. It’s not necessarily going to put people out of work, but the work itself is going to change. By using computer vision to do things like inspection, or things like safety and security, the work becomes much more efficient, which allows people to do other things, things that are more impactful to the business.

Jon Hazzard:

So I don’t see it actually taking jobs. I actually see it shifting jobs. And we are still in a bit of a period of skepticism about what this actually means for businesses. When you hear artificial intelligence, you think of the movies, where these robots are going to come in and take over our lives. That’s not really the case. I mean, artificial intelligence is a bit of a misnomer, because even from a computer vision standpoint, it is still pretty rudimentary. Very hard to do, but still pretty rudimentary. It requires rules. It requires oversight. It requires training in order to get to the point where we want to be. And at Chooch, that’s what we do really well.

Jon Hazzard:

We actually train these computer vision, these video analytics, models to replicate what a human would see, what a human subject matter expert would see. So when we talk about defect detection on bottles going down a manufacturing line, what we’re doing is trying to capture the knowledge of someone who looks at that line every single day. When they look at a bottle, they understand that, hey, this cap’s off or the cap’s not on the right way, and they can correct that to make sure the stock doesn’t go out and there’s no loss of reputation or loss of product. We’re training models to do the same thing, just more efficiently and probably at a higher level. So artificial intelligence and computer vision really are still tools that we can use as humans, rather than human replacements.

Kieran Gilmurray:

But machine vision is hard to get right as well, isn’t it, Jon? Because we’re talking about all these examples and use cases, and you mentioned 25 companies at the beginning, from one-person bands to multinational corporations. What are the things actually causing some of the challenges to machine vision becoming a mass-adopted technology, and how are you guys leading the pack?

Jon Hazzard:

Yeah, it’s a great question. It really is. And it is hard, and many companies try to make it sound easy because they want it to be easy. One of the things that interested me about Chooch is that there are a lot of companies out there that will tell you they can do everything, or that they can do things similar to what Chooch can do. But one of the drawbacks there is the time that it takes to deploy these things. And because of that time, people tend to solve for the lowest common denominator. So we go and we’re talking to a business, and they want to do something specific, like defect detection, or understanding when people are not wearing a hard hat inside of their shipping facilities, or not wearing vests, or making sure that forklifts are staying in the right zones.

Jon Hazzard:

Most companies, because of the difficulty of creating AI models to do that, try to build them to capture the lowest common denominator. What I mean by that is they’re probably somewhere between 85 and 90% accurate, because they can’t train on your data. They’re just using third-party data, trying to bring things in and understand what is going to capture most of the market. At Chooch, we do it a little bit differently. Yes, we do use some pre-trained data. We do bring in data in order to pre-train these models. But what we actually do is continuously iterate on those models in order to refine them, using our customer’s own data.

Jon Hazzard:

So what I mean by that is, say we deploy a solution to do hard hat detection, right? That’s one we see quite often in shipping facilities, retail facilities, warehousing, construction sites, whatever it might be; we see hard hats and high-visibility vests quite often. We could train a model to recognize hard hats and high-visibility vests using images from websites or by pulling down videos. But that’s not really going to capture what these types of alerts should look like inside of our customers’ facilities. So after we deploy a model, we’re continuously iterating on it and continuously refining it using our customer’s own data, so that we’re not stopping at that production level. Once it’s sent out, we’re not stopping. We’re continuing to retrain and retrain in order to get it as accurate as it can be.
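
To make that loop concrete, here is a minimal sketch of the pattern Jon describes: start from a generic pre-trained detector, then periodically fold in newly labeled frames from the customer’s own cameras and retrain. Chooch’s own tooling is proprietary, so this uses the open-source Ultralytics YOLO API as a stand-in, and the dataset files, class names and schedule are hypothetical.

```python
# Sketch: pre-train on generic data, then iteratively refine on customer footage.
from ultralytics import YOLO  # pip install ultralytics

# Start from a generic pretrained detector (the "third-party data" baseline).
model = YOLO("yolov8n.pt")

# Initial fine-tune on whatever labeled hard-hat images exist up front.
# hardhat_v1.yaml is a hypothetical dataset config (paths + class names).
model.train(data="hardhat_v1.yaml", epochs=50, imgsz=640)

# After deployment, periodically fold in newly labeled frames captured from
# the customer's own cameras and retrain, so the model converges on what
# alerts actually look like in that specific facility.
for round_num, dataset in enumerate(["hardhat_v2.yaml", "hardhat_v3.yaml"], start=2):
    model.train(data=dataset, epochs=20, imgsz=640)
    metrics = model.val()  # check accuracy before promoting to production
    print(f"refinement round {round_num}: mAP50-95 = {metrics.box.map:.3f}")
```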

Tom Allen:

Maybe we can help our audience and listeners out here. Can we just get a top-level view from you? I’ve got an understanding, but let’s say it’s someone new coming to you, Jon, at Chooch. What do the techniques used in computer vision look like? We’ve got things like image classification, object detection and semantic segmentation. What would you say to someone trying to understand these, and to understand how they fit into their business?

Jon Hazzard:

Yeah. One of the things we often see is image recognition. When people try to understand what our business is, what we do and how we train, I always give the example of when you log into a new website and you have to click to find the images that have the stoplights in them, or the motorcycles, or whatever it might be. What you’re doing is actually helping that website, or helping Google, whoever it might be, train their image recognition models, right. That’s what they’re doing. They’re taking that data and using people, through these captchas, to do the training. That’s not what we do here at Chooch. We have patented technology that is contextual, understanding the overall environment inside of the image.

Jon Hazzard:

So what that means is, if we’re pointing our camera around inside of a room, and that room has a dining room table, dining room chairs and a chandelier above the table, it’s not going to just point each one of those things out. Yes, it’s going to show the chandelier, it’s going to show the dining room chairs, it’s going to show the dining room table, but it’s also going to recognize the scene in which they’re occurring. It’s going to say, based on these parameters that I find, I actually think we’re in a dining room inside of a residential location. And it’s providing that context, which is really important. If you have cameras in an amusement park, say, Disney World, you want to understand the difference between having Mickey Mouse on a mug versus on a t-shirt versus in a six-foot costume sending [inaudible 00:16:15]. You want to be able to understand that, and also put it together and say, I believe we’re at Disney World based on what we’re looking at.

Jon Hazzard:

So it’s really not image classification, it’s contextual, and it really does look to mimic the way that our brain thinks. It’s much smarter than image recognition. It also understands actions. If someone is falling down, it understands that action. If someone has a sentiment on their face where they look angry, or they look happy, it understands that as well. So it’s much deeper than image classification, and because we can build off all of that general perception, that’s what allows us to build these models a lot quicker: we can easily recognize the scene in which things are occurring and then piece that together when developing new models.
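
As a toy illustration of that contextual step, here is a sketch that takes the object labels a detector found in a frame and scores which scene they jointly imply. Chooch’s patented general perception is far richer than this; the cue lists below are invented purely to show the idea of reasoning over object co-occurrence rather than single labels.

```python
# Toy scene inference from object co-occurrence (cue lists are hypothetical).
SCENE_CUES = {
    "dining room": {"dining table", "dining chair", "chandelier"},
    "construction site": {"hard hat", "hi-vis vest", "scaffolding"},
    "hospital ward": {"hospital bed", "iv stand", "patient monitor"},
}

def infer_scene(detected_labels: set[str]) -> tuple[str, float]:
    """Score each candidate scene by the fraction of its cues actually seen."""
    best_scene, best_score = "unknown", 0.0
    for scene, cues in SCENE_CUES.items():
        score = len(cues & detected_labels) / len(cues)
        if score > best_score:
            best_scene, best_score = scene, score
    return best_scene, best_score

scene, confidence = infer_scene({"dining table", "dining chair", "chandelier"})
print(f"I think we're in a {scene} ({confidence:.0%} of cues matched)")
# -> I think we're in a dining room (100% of cues matched)
```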

Tom Allen:

Love it. And I’ve got to ask you, and I know Kieran’s going to giggle here, but I’m obsessed with augmented reality and virtual reality. So I’ve got to ask you, Jon, there are some crossovers here, and maybe you and Chooch are cooking up some ideas in the kitchen on how to apply it. I haven’t got them here with me, but I was gifted a pair of Google Glasses, the commercial ones, Jon, and I remember ranting to you about how awesome they were and how obsessed I was with them. I’ve got to know, from a personal point of view, where do you see this going? Where do you see it helping? Because it was showing me ideas on a cooking line, but I’d love to know about your kind of application with Chooch, and maybe just your general view on building this into our virtual glasses, because now I want a pair of the new Ray-Bans. I was saying I want some for the holidays. I can record, I can take phone calls. I’m obsessed with it. I’d love to know what your view is, with your experience in that area, Jon.

Jon Hazzard:

Yeah. The consumer aspect of it is obviously massive, and it’s not a path we go down quite often, but we have had conversations with a lot of the companies in that marketplace about bringing our general perception into those types of devices. We do have an app. It’s called Chooch IC2, and it’s available on the Apple App Store as well as the Android store. You can download it and just run it through our deep perception engine. You can point it around your room, and it’s going to recognize different things inside of your room and tag them. It gives you about 1%, or less than 1%, of what our overall platform can do, but it gives you an idea of what we could do if we were to launch something in the consumer market, including Google Glass and other things.

Jon Hazzard:

In addition to that, one of the other areas we’re focused on is the digital signage space, which basically allows companies to do smart advertising. As you’re walking towards a digital sign in a stadium, or in the mall, or wherever else you are, it can recognize the age of people, what they’re wearing, the type of style they have, and push out an ad that’s specific, or that they think would be specific, to that demographic. So if it sees a group of teenagers walking through, it might put up an ad for, say, the latest Jordan shoe; or if you’re a couple walking through and you’re a little bit older, you might be served an ad for a BMW or something like that. So there are different things we can do to capture that consumer base, as well as digital advertising that’s a little bit smarter than what it is now, which is just on a repeating cycle.
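
A toy version of that signage logic might look like the sketch below: anonymous audience attributes in, ad choice out. The audience segments and ad inventory are invented for illustration, not taken from Chooch’s product.

```python
# Toy demographic-targeted signage: rules over anonymous audience attributes.
AD_RULES = [
    # (predicate over the detected audience, ad to display)
    (lambda a: a["median_age"] < 20 and a["group_size"] >= 3, "latest-sneaker-spot"),
    (lambda a: a["median_age"] > 45 and a["group_size"] == 2, "luxury-sedan-spot"),
]
DEFAULT_AD = "house-brand-spot"  # fallback when no rule matches

def pick_ad(audience: dict) -> str:
    """Return the first ad whose rule matches the (anonymous) audience profile."""
    for matches, ad in AD_RULES:
        if matches(audience):
            return ad
    return DEFAULT_AD

# Example: a group of teenagers walks past the sign.
print(pick_ad({"median_age": 16, "group_size": 4}))  # -> latest-sneaker-spot
```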

Kieran Gilmurray:

Tom, I can see you walking past that smart digital board, and the next minute it starts offering you and your partner engagement rings. All sorts of dangers.

Tom Allen:

Well, I’ve seen it before. We’ve got one in Birmingham, on the John Lewis building. And to your point, Jon, it was one of the first ones that detects people and knows what kind of ads to show them. So I was thinking, what is your viewpoint on maybe the challenges with machine vision and the hot topic at the moment, data collection? Because it’s huge. I mean, the UK released their AI roadmap strategy to become an AI powerhouse, and everything is looking at regulation and data. Machine vision, as you said, collects a lot of data, and it’s about how you use that data. And Kieran, did we hear the stat? It was 92 or 93% of data that is unstructured at the moment, and it’s just growing more and more. What’s your thinking on that, Jon? How do you think that’s going to change?

Jon Hazzard:

Yeah, there’s a lot of noise in the data, right. And I think, in addition to the skepticism I talked about, or the fear that this is going to take over jobs, there’s also a fear that this is going to intrude upon our security and our privacy. And I think that’s something we always have to consider. One of the things I like to do here is a lot of research on AI ethics, to make sure that what we’re doing and what we’re being asked to do is still ethical, because I think that’s really important for us as a company to understand as we grow and as we start seeing the applications get more and more wide-ranging. So it’s something we have talked about here, actually creating an ethics board that we can bounce these ideas off of, because I do think it’s very important in order to protect people’s privacy.

Jon Hazzard:

What I will say about what we do specifically is that, in most cases, we’re not recording any personal information. And I think this is pretty wide-ranging across most of the AI companies I have engaged with; most of it is just metadata, right? So it might see Jon Hazzard walking down the street, a tall African American man with a beard, but it is just going to say: tall African American man with a beard, serve up this ad, right? It’s actually just capturing the metadata. There’s really nothing personal there. However, there are industries where it does get a little bit more personal, when we talk about our applications in healthcare and some of the things we’ve been pulled into doing with smart [inaudible 00:22:19], looking at hospital settings to make sure that no one is falling out of bed.
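
To picture what “just metadata” might mean in practice, here is a small sketch of the kind of record that could leave the camera: a few coarse, anonymous attributes and nothing else. The field names are hypothetical, not Chooch’s actual schema.

```python
# Sketch: a metadata-only detection record (field names are hypothetical).
from dataclasses import dataclass, asdict
import json
import time

@dataclass(frozen=True)
class DetectionEvent:
    camera_id: str
    timestamp: float
    label: str                    # e.g. "person"
    attributes: tuple[str, ...]   # coarse, anonymous descriptors only
    # Deliberately absent: face crops, embeddings, names, track history.

event = DetectionEvent(
    camera_id="street-cam-03",
    timestamp=time.time(),
    label="person",
    attributes=("tall", "beard"),
)
print(json.dumps(asdict(event)))  # this small JSON is all that gets stored
```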

Jon Hazzard:

At that point, we are getting into some of the personal data, so there are things we have to do in order to protect that. From a Chooch perspective, what we also have the ability to do is deploy our entire solution on the edge. Now, most companies can do inferencing on the edge, which basically means they’re training models in the cloud, creating those data sets inside the cloud, and then pushing them to some sort of edge device. The edge device does the inferencing: the video stream comes in, the model works out what the tags are and what’s happening in that scene, and then it spits back out those analytics, and you have that loop. That’s all done on the edge, because it really has to be when you’re talking about video, because of the bandwidth issues you’d otherwise have.
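
As a rough sketch of that split, train in the cloud, infer on the device, the loop on an edge box might look something like this. The model file, label set and postprocessing below are placeholders; the point is only the structure, where frames stay local and lightweight metadata is all that goes out.

```python
# Sketch: edge-inference loop. The model was trained off-device; only metadata
# leaves the box, never raw video. Model, labels and postprocessing are
# hypothetical placeholders.
import json
import time

import cv2                    # pip install opencv-python
import numpy as np
import onnxruntime as ort     # pip install onnxruntime

session = ort.InferenceSession("hardhat_detector.onnx")  # trained in the cloud
input_name = session.get_inputs()[0].name
CLASSES = ["person", "hard_hat", "hi_vis_vest"]  # hypothetical label set

def preprocess(frame: np.ndarray) -> np.ndarray:
    # Resize/normalize to the network's assumed input (640x640 RGB, NCHW).
    img = cv2.resize(frame, (640, 640))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    return img.transpose(2, 0, 1)[np.newaxis]

cap = cv2.VideoCapture(0)  # local camera; frames never leave the device
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    outputs = session.run(None, {input_name: preprocess(frame)})
    # Postprocessing depends entirely on the model; here we pretend the first
    # output is an (N, 2) array of (class_id, score) rows.
    detections = [
        {"label": CLASSES[int(c)], "confidence": float(s), "ts": time.time()}
        for c, s in outputs[0]
        if s > 0.5
    ]
    if detections:
        print(json.dumps(detections))  # only the metadata is emitted
cap.release()
```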

Jon Hazzard:

However, what I will say is that our entire solution can be moved to the edge. That means we can take our entire platform, including all of our data training, augmentation and segmentation tools, put it all in an air-gapped solution, and have our customers train their own models on the edge without that data ever leaving their premises. That’s something we can do, and I don’t believe there’s anyone else out there in the market that can offer that layer of security, where we can put our entire platform out on the edge in an air-gapped solution.

Tom Allen:

I’d love to know as well: what would you say to a new person coming into machine vision, trying to understand how they should use these technologies? What would you give them as either a question or a piece of advice? So picture a new business thinking, oh, we’ve got a new construction site and we don’t know whether we need this technology; or we’re an airport and we don’t know whether we need this technology for security; or, back to Kieran’s great point, they’re on a tomato line and they don’t know if they need the technology. What would you say are the starting steps, or the question to ask, Jon?

Jon Hazzard:

Yeah. I think the first question I always like to ask people is: what’s your standard operating procedure now? What are you doing today, and what does that process look like? Because if we can understand what that process looks like today, we can understand where computer vision and Chooch can be a tool to help you do it more efficiently. It’s not going to be a fix-all, and it’s not going to be able to solve everything. However, it is a tool that can help you do things much more efficiently, at a greater scale and at a greater pace. So what I usually ask is, what’s your standard operating procedure today? From there, we can understand what the use cases could potentially be, and then how we can use computer vision to implement or push into those standard operating procedures in order to get something done.

Jon Hazzard:

In addition, there are other places where you might see it as a profit center. What type of analytics would you like to have about your customer or your product that you think would help you drive more revenue? If you understand that, then we can also understand how we can help you get there. But you really have to have a good understanding of what your business is and what your workflows are in order to understand where computer vision can fit in.

Jon Hazzard:

And what I will say is, I think that first piece, understanding how we can augment standard operating procedures and standard processes, is where we’re seeing the, I guess, bleeding-edge adoption of computer vision. The profit center is there as well, trying to drive additional profits, but at this point it’s really more about saving money, keeping people safe and improving customer and employee satisfaction. I think that’s where we’re seeing the greatest and quickest adoption, and the profit center would be a little bit secondary at the moment.

Kieran Gilmurray:

Yeah. Imagine how many decisions we make with our eyes every single day, though. Look inside of a firm: do we have happy staff or unhappy staff, overworked staff or whatever else, never mind before you even get to the business side of things. It’s an amazing technology with amazing use cases, I think.

Tom Allen:

Yeah. I love it. And I’d love to switch it up as well, just quickly, because that was a great point you made, Jon. Looking at it from maybe not the business user’s side: I got told a lot at the AI Summit in London last week that everything was about the digital skills shortage. There are not enough people to take these jobs, not enough people going into it, and AI people that can understand it and know it are becoming more and more limited. And there’s a big push by the government to bring these courses in. So what would you say to someone that’s maybe 15 or 16, or maybe going into their first career, and they’re really excited by machine vision and computer vision? Where would you say to look, or what piece of advice would you give them? What kind of grassroots-level thinking would you give them?

Jon Hazzard:

It’s a really good point. And it’s something that, if we think back 20 years, when I was going through high school, it was really data science and computer science. Those were the big majors where everyone was focused, and we knew they were going to be a huge piece of our world. I think AI is in the same space, and I think computer vision certainly is too. What I will say is that you’ve got to stick with it. You’ve got to realize that you’re going to hit a number of different blockades, everything from difficulty in implementing to changing the status quo. Those are all things you’re going to run into, because they’re things that we run into every single day.

Jon Hazzard:

It’s going to be too difficult to implement. It’s going to be too costly. It’s not going to have a quick enough return on investment. So you’re going to hit a lot of blockages. But know that this technology is going to change the world. It’s going to be here. It’s no different than the internet 25, 30 years ago. It is the same type of technology, but there’s going to be a relatively slow adoption. It’s going to take some time to get over that hump. So for anyone who’s a younger person looking to get in: stick with it. It’s going to be massive. And for any companies, or people inside of companies, that are working now and are up against these AI initiatives...

Jon Hazzard:

...they know that they want to do something, and they understand that this could potentially change the business. What I will say is, find something easy to do. Find something that is done inefficiently right now but would be a massive improvement for the business, and pick that as your strategy. Don’t try to boil the ocean. Don’t try to solve everything all at once. Pick one use case that you can do in a relatively small area, maybe 20 cameras around your facility, but that you know can really impact your business. Find that use case, and come to Chooch, because we can help you solve it. Find that use case, create a plan around it, something that’s non-intrusive but can certainly show the value of computer vision. And once we show it once, it’s going to resonate throughout the entire business.

Tom Allen:

I love those points as well, because, I don’t know why it reminded me, but it was looking at how things are expanding, like you touched on there: new initiatives, new departments being set up. I remember Unilever talking about how they’re setting up a whole R&D department for using data science in retail. You normally think of that as, oh yeah, you use it in drug discovery or pharmaceuticals. And they were like, we’re actually putting together a 100-person team, or however many people it was, and applying data science to such a crowded market, to understand trends in what shampoo you use. And I was just taking a step back, like, this is getting mental, how many new departments are being set up and what they’re looking at.

Jon Hazzard:

Yeah. A lot of the time, it doesn’t have to be that difficult. I often hear the same thing. People have 200 data scientists coding every day to try to drive these business strategies for them, and I’m like, that’s crazy. That’s a lot of people, and many times the [inaudible 00:30:36] what the overall strategy is, like, well, we just need AI as part of our business. We just need computer vision as part of our business. And it’s like, but what are you looking to solve? I think that’s where it goes wrong in these innovation groups: because they have innovation in their name, they’re trying to create these new things, instead of trying to solve problems or find ways to implement things inside of their existing business that can just make it run a little bit better.

Jon Hazzard:

So you don’t have to bring in everything and create everything from the ground up. Really, just find something that has an issue, that has a massive return on investment with a relatively low lift. That’s where we start. We start there, and then we expand upon that after the business becomes familiar with computer vision and what it can do for them.

Kieran Gilmurray:

Yeah. That’s the interesting part, isn’t it? As Tom said, everybody’s looking, as you’re saying, Jon, for the new thing, but if you automate, augment or digitize your current process, you can get that lift you’re talking about. So, Jon or Tom, you were saying Unilever were talking about new ways of doing things, but imagine you’re in a supermarket, or whatever store, and you find the product adjacencies and look at the routes and paths that people actually take. There’s a whole science around that, buyer psychology or buyer behavior, but it’s all done based on data and, to some degree, guesswork. If you could visually see what was happening, track and trace and heat-map it, and see what people are doing and thinking, whether they’re confused or not, that’s an existing use case that could become so much smarter. A really, really interesting space.

Tom Allen:

Yeah. I love it. And I feel like we could ask you a lot more questions, Jon. I’d love to know, for anyone that does want to reach out to you and find out more about you, Chooch and everything you’re doing, where they can do that. And quickly, on a side note, I actually remember trialing the Chooch app, because I remember emailing Jeff afterwards and saying it’s awesome. I remember using it when I was out for breakfast with my mates, I think in Milton Keynes, and I was trialing it and pointing it around, and it could pick out milk, it could pick out tables, and it picked out my friend within an age bracket. So to anyone listening, I’d recommend downloading that and trialing it, because it really gives you a visual sense of how these things fit into your daily life. But tell us, where can people find you? Where can they follow up with you, ask more questions and get in touch with you, Chooch and the wider team, Jon?

Jon Hazzard:

Yeah, absolutely. Anyone who wants to contact me can reach out to me directly. It’s [email protected] Please visit our website, chooch.ai, and sign up. We do have a free license available, so if you sign up, you can register there and play around with some of our development tools, especially for the developers. Or reach out through our website or our LinkedIn. We’re always happy to have conversations, even if they’re just consultative, just understanding what this could potentially mean for your business. Or if any of those innovation groups, or people inside of companies, want some ideas on how they can bring this forward to their leadership and their decision makers, feel free to reach out. More than happy to have those conversations as well.

Kieran Gilmurray:

Love it. It’s one of the best I’ve seen in a while as well, Tom, so I would recommend trying it. I was going to say, you really do learn by seeing.

Tom Allen:

I wasn’t very good in school, but I always say, if you took me back and put me in a VR machine, I’d be much better, because I learn by seeing and doing, as I figured out later on, to second Kieran’s point. So it’s been really great to have you, Jon. We really appreciate and value you taking the time to join us today, and it’s been a really interesting topic. Thanks for joining us, and we hope to have you back again soon.

Jon Hazzard:

Absolutely. Thanks guys. Appreciate it, Tom. Thanks, Kieran.

Kieran Gilmurray:

Thanks, Jon.

Learn more about Computer Vision
