Computer Vision

Synthetic Data Webinar: Faster AI Model Generation & More Accurate Computer Vision

To build accurate computer vision models, you need data—and lots of it. Now, you can generate images with synthetic data and augmented data on the Chooch AI platform, and then use these synthetic images to train and deploy computer vision models. What you’ll learn in this webinar is how to use different technologies with the same goal: deploying accurate computer vision even faster. Watch the video or read the transcript below.

Emrah Gultekin: So thank you all for joining in today. What we're going to do today is run through a lot of material, and if you have questions, you can ask them during or after the webinar. But basically, what we're talking about today is synthetic data. It's a part of generating data so that you can train the AI. And so at the end of the day, what we're talking about really is some of the inferencing that goes on, and the problem that you're trying to solve. So let's say you're tasked to detect something and it's not in a pre-trained model, there are no pre-trained classifications for it, or if there are, they're not very good. At the end of the day, what you're trying to do here is generate better inferences. So this is where it all happens.

And the inference or the prediction … You're sending in video feeds or images, and you're getting responses for them. So it's detecting simple things like cars, people, faces and so forth, and very complex things like parts or scratches or types of cells. So you can do a lot of stuff. It really ends up here in inferencing, which is important for us, important for the clients, and also for all the ecosystem partners out there. But if you go back to the cycle here, it really begins with data. So today, we're going to be talking about these things, and that is data generation to train a model. So it goes in here. We're not going to be talking about model training today, but all of you who are on this call know something about this.

But what you do is you create data, then you train the model, then you do the inferencing. And the inferencing helps you create data again. So this is a cycle that goes back and forth. Today we're going to be talking about data generation through a series of tools. It's not just synthetic data: you've got manual annotation, you've got smart annotation, data augmentation, and so forth. The result is better inferencing all the time: increasing the accuracy, increasing the stability of the model, and creating those dynamic models that we all dream of. So the question becomes, where do you get the data?

So for data, you have public data sets, you've got client data sets, you can do web scraping and so forth. But at the end of the day, the issue has always been, and this is particularly true in visual detection, which is what we're talking about today, the visual AI: to train these models, you need a lot of data. And that's in the thousands of images per class. So where do you really get that data? There are ways to do it. You can scrape, you can get client data, you can get public data sets and so forth. But it's usually not annotated, it's unstructured, and it's not enough. So that's the question: where do you get it? And one way to do this is to synthesize the data. That's what we're going to talk about today: getting some base data, some real data, and then synthesizing that to create diverse data sets, in order to generate the data set necessary to train the model, or train multiple models at the same time.

So this is what this is about. On our platform, you can do this, but you can also import already generated data sets from somewhere else and generate the model. So you don't really have to do it on our system; you can do it on a partner system, on another ecosystem partner who does this type of data set creation, or with people who do annotation and so forth. So you can upload data sets that were created somewhere else and then generate the model with one click, basically.

So the problem here is that data sets require lots of labeled, high-quality, and usually copyright-free images. That's a lot of stuff, and it's very difficult to overcome. Your goal is to generate a computer vision model that can work in real environments, and this means you really need a lot of images with sufficient variation. These could be coming from video frames or from images themselves, but you really need a substantial amount of them. And the data that you generate has to have a minimum of labeling errors. So it's not enough to just have raw data, maybe having files that are labeled; you also need to annotate them, especially if you're doing object detection. Doing this manually is a very slow process, it introduces lots of errors, and it takes a long time. It's not scalable, basically.

So manual annotation is necessary. You have to do it, because that's how you seed the AI, the model, and the data set, but it's not scalable for a number of reasons. Humans are not scalable. So the conclusion is: you generate synthetic images of the annotated objects to train a model and detect real-world situations. That's really what we're doing here. You need lots of images, they need to be labeled, and they need to be high quality. So the workflow and methodology we have here at Chooch AI is quite simple, actually. You sculpt the problem with the client or with the partner. Basically, what is the case? What is the client trying to detect? And usually, the answer to this is not that simple.

And sometimes you have to walk through it with the client to understand what they're actually trying to achieve with visual detection. You sculpt the case, and on the technical side, you start to check the performance of the pre-trained models or existing models. You want to be able to use pre-trained models as much as possible to deploy a model or inference engine to the client. But you also have to understand that usually these pre-trained models are not sufficient for production purposes unless they're built for a specific purpose. Then what happens is you generate data. Basically, you're creating this data to augment the data that already exists, or generating something from scratch.

The best way to understand if your data set is good is to train a model and then test it. And then when you deploy it, you get feedback from the users, and that goes into the data set again. So it's a cycle where you're generating models which are dynamic. If you're just generating a model that's static and you say this works out of the box for everything, that's usually not the case. You need user feedback, you need different types of feedback, to enhance the data set as you move forward. So this is a more extensive workflow here. But basically: understand the available data and the detection requirements from the client. You generate the data for the data gaps, that's the second step, then train and test the models.

So you're trying to increase the accuracy of the model. And that is the F1 score that we talk about, which is a harmonic mean between precision and recall. You want to increase that. Then you deploy the models and receive feedback from users on correct, incorrect or unknown objects, or maybe there's something new that needs to be trained and you want to put a new class in. So you can receive that feedback directly from users, annotated or not annotated, put that feedback back into the data set, and retrain the model. So you have that cycle there, where the workflow is very crucial to the scalability of this entire system. These are some of the tools that we use on our system: you have obviously manual data set labeling, you've got smart annotation, 2D synthetic data, 3D synthetic data, data augmentation. Today, we're going to be talking about the ones that are really related to data set generation. Then you go into model training and dense training, cloud inferencing, F1 score calculation, unknown object detection, user feedback and edge inferencing, and device stream management.

So you have this whole gamut that you need to be able to cover to deploy at scale with clients. But today we're going to be talking about synthetic data and some of the annotation tools that we have. We have manual annotation, obviously very, very important. Then you have video annotation, which is annotating a video and then having the tracking system track that object throughout the video and generate a data set based on that, which can generate hundreds of thousands of images from a video, basically. And then you have smart annotation with AI, which is using the inference engine, again, to annotate new data. So this is part of that cycle where you need the inference engine to do the annotation work. Our core as a company is the inference engine, so we've focused totally on that. And it's important to understand that it really closes the loop on the dynamic model creation cycle.

Data augmentation is also very important here. And then you have synthetic 2D data and synthetic 3D data. We're going to go through these today. So manual annotation is very straightforward. It's pretty much people drawing bounding boxes or polygons around objects. It's a painstaking process, it requires training, and it's not highly skilled work. But it's actually a very important part of this, because if you don't have the manual annotation, you can't really teach the AI new things and increase the accuracy of the inference engine. Even if you have a machine annotating, it's really based on the human who initially annotated that data set to train the model to do the inferencing for the annotation. So it's a very important part of this process.
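For readers who want a concrete picture, here is roughly what one manual annotation might look like once exported. This is a hypothetical COCO-style record; the talk doesn't specify Chooch's actual export schema:

```python
# Hypothetical COCO-style annotation record for one bounding box.
# The actual export schema used by the Chooch platform may differ.
annotation = {
    "image_id": 42,                      # which image this box belongs to
    "category_id": 1,                    # e.g., 1 = "spark_plug"
    "bbox": [120.0, 85.0, 60.0, 40.0],   # [x, y, width, height] in pixels
    "segmentation": [[120, 85, 180, 85, 180, 125, 120, 125]],  # optional polygon
    "iscrowd": 0,
}
```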

And this is a basic view from our dashboard. You're basically just drawing bounding boxes around objects, and you can label them whatever you want. And you continue doing this through the entire data set. Then you have these different classes over here, which will show up as you annotate. So you see the entire gamut of images that are produced in that data set. And you can see a video over here as well, which is actually part of this. And then you can see here that you have synthetic data, smart annotation, and augmentation as well. And you can add images to this data set and whatnot. The video annotation is also very important because it allows you to scale the number of frames that are being put into the data set.

And this is an important tool, especially if you're doing something very specific for a specific task: it actually tracks that entire object, even an unknown object or an unknown action, through the life cycle of that video, and then generates images into that data set. So this is a conveyor we're uploading; it's just an MP4 video. Basically, what you do is you click on it, and you start annotating. And you can choose which area of the video you want to annotate. You've got an orange part here, and then you've got a blue part, and you click on annotate process and it processes it. It'll just follow these through the video and generate a data set for them. So you do it once and it's already generated something like 80 images of each. This is a very important tool, especially if you're doing something very specific in a specific field, or you have lots of video and you want to annotate it. It then generates this data set for you.
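As a rough sketch of the idea behind video annotation, the snippet below tracks one hand-drawn box through a video with an off-the-shelf OpenCV tracker and collects a labeled frame at each step. The file name is hypothetical, and Chooch's internal tracker is certainly more sophisticated:

```python
# Minimal sketch, assuming OpenCV with the contrib trackers installed
# (pip install opencv-contrib-python). On some versions the tracker is
# under cv2.legacy.TrackerCSRT_create() instead.
import cv2

cap = cv2.VideoCapture("conveyor.mp4")   # hypothetical file name
ok, frame = cap.read()
box = cv2.selectROI("annotate", frame)   # draw the initial box once by hand

tracker = cv2.TrackerCSRT_create()
tracker.init(frame, box)

dataset = []                             # (frame, box) pairs become labeled images
while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)   # follow the object frame by frame
    if found:
        dataset.append((frame, box))
cap.release()
```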

Another powerful tool is smart annotation. This is something that we launched a few months ago. Basically, you're using the inference engine to annotate already known objects. And these can be pre-trained by our system, or pre-trained by the user, by the enterprise who's already using the system. So you can use your custom object models, or you can use pre-trained Chooch object models as well. And it automatically annotates an entire image data set without manual labeling. It'll annotate everything, and then you can review the annotation done by the machine. This is a very important tool, and again, the inference engine has to be very strong to be able to do these types of things. So we've got some spark plugs here.
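A minimal sketch of the smart-annotation idea, using torchvision's off-the-shelf Faster R-CNN as a stand-in inference engine (Chooch's own engine is not public). Confident predictions become draft labels for human review:

```python
# Sketch: run a pre-trained detector over unlabeled images and keep
# confident predictions as draft annotations. Requires torchvision >= 0.13
# for the weights="DEFAULT" argument.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def smart_annotate(image_path, score_threshold=0.8):
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        pred = model([image])[0]
    # Keep only confident boxes; a human then reviews and corrects them.
    keep = pred["scores"] > score_threshold
    return pred["boxes"][keep], pred["labels"][keep]
```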

You click on smart annotation, you press My Object Models, pick the spark plug and press start. These images are unannotated, and what basically happens when you press start is that it automatically annotates everything in the data set with the spark plug. Then you can use that to train the model. So what you're doing is constantly building a data set, automating the building of the data set through these tools, in order to train the model as you move forward. Data augmentation as well is very, very important. This is something that's done on the back end of some of the already existing deep learning frameworks. But what you really want is to be able to tune it, if you're a power user. In some instances, even if you have a lot of data, it's still not enough to train the model because it doesn't have the right views, or there's not enough noise, and so forth.

So you want to be able to do data augmentation here, and to do it on the system as well. It helps generalize the existing data set, basically. So we have a part data set here; it's the spark plug part. Let's say you have these five images of these different things, and they're annotated. Then you have these images here with the different parts, you press augmentation, and it pops this up. You can go up to 100 iterations, but we don't recommend that; we recommend three or four times the original amount, otherwise it just overfits. But you can play around with it. So it's rotation, horizontal flip, cutout, shifting, scaling, the basic things that you do. There are default settings here, but if you want to play around with it, you can do that. That's fine.
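For readers who want to see what these options look like in code, here is a comparable augmentation pipeline using the open-source albumentations library. The parameter values are illustrative, not Chooch's defaults:

```python
# Sketch of the augmentations described above (rotation, flips, cutout,
# shift/scale, noise, blur, brightness/contrast). Note: CoarseDropout's
# arguments changed in newer albumentations releases; max_holes is the
# pre-2.0 spelling.
import albumentations as A

augment = A.Compose(
    [
        A.Rotate(limit=30, p=0.5),
        A.HorizontalFlip(p=0.5),
        A.CoarseDropout(max_holes=4, p=0.3),          # "cutout"
        A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.2, p=0.5),
        A.GaussNoise(p=0.3),
        A.Blur(blur_limit=3, p=0.3),
        A.RandomBrightnessContrast(p=0.5),
    ],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
)

# Generating roughly 3-4x the original data, as recommended in the talk:
# augmented = [augment(image=img, bboxes=boxes, labels=labels) for _ in range(4)]
```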

And scaling, rotation, noise, blur, brightness, contrast, and you start the augmentation on the entire data set, randomized, based on a randomization principle, and then it generates all this. So you've got 1,000 of each here, which is a lot. And then you can use that to train the model, basically. You can see here that this is quite different from the original one: it's flipped around, with the noise changed, the lighting and coloring changed, and all that. Another tool, which is very, very important, is the 2D synthetic data. It's almost like a combination of augmentation and synthetic data; it's a lot of augmentation, actually. But what you're basically doing is creating bounding-box-free transformations with automatic background removal. So when you annotate something, it segments that out and places it in different environments, basically. That's what this is about.

It rotates it, flips it around, does a lot of different distancing, and then creates that data set. It's all randomized, and that's basically part of this process. So we go back to a part data set here. This is the same part; I have a quick-release part here, and I want to create a 2D data set of it. I'm creating another thousand images, with a maximum object count per image. And you can choose the different themes that you have. You can use your themes, or you can use our themes; these are basically just backgrounds. So based on what that environment is, it could be the sky, it could be industrial, and so forth. And you choose the raw data here, a conveyor belt background, and they're generated. So I've got a conveyor belt background here, I generate that, and it generates all these images with these parts on a conveyor belt.
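The 2D synthesis idea can be sketched in a few lines: paste a segmented, background-free object onto random backgrounds with random rotation, scale, and position, and record the resulting bounding box automatically. The file names here are hypothetical:

```python
# Rough sketch of 2D synthetic generation, assuming the object has already
# been segmented out as an RGBA cut-out (alpha channel = mask).
import random
from PIL import Image

def synthesize(object_rgba_path, background_paths, n_images=1000):
    obj = Image.open(object_rgba_path)
    samples = []
    for _ in range(n_images):
        bg = Image.open(random.choice(background_paths)).convert("RGB")
        part = obj.rotate(random.uniform(0, 360), expand=True)
        scale = random.uniform(0.3, 1.0)          # random apparent distance
        part = part.resize((int(part.width * scale), int(part.height * scale)))
        x = random.randint(0, max(0, bg.width - part.width))
        y = random.randint(0, max(0, bg.height - part.height))
        bg.paste(part, (x, y), part)              # alpha channel used as the mask
        bbox = (x, y, x + part.width, y + part.height)
        samples.append((bg, bbox))                # image + auto-generated label
    return samples
```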

These may not look that realistic, but it's good enough for the randomization of the data set. That's what you're trying to do: randomize the data set so that it creates a more robust model. So we can move on to 3D synthetic data, which is also important. But it's important to understand that not all companies have CAD files, and this is a requirement for 3D synthetic data: you need to have an object file and the material file. So that's the textures and the object, and you just upload it. It's similar to the 2D in that you choose a background theme and a number of images, and it generates them with the bounding boxes inside. Let's do this 3D data set. You can see here it's an ATR 42 object. It looks like this; you can rotate it around and do what you need to do with it.

Basically, you choose how many images you want to generate. You have advanced settings here for power users, things like grayscale and the object files as well; you can take those out and randomize them. Then you press choose themes, or use your own theme; this one is in the air, so it's sky. And then you say generate. It'll generate these, and you get a data set with 3,000 images of the ATR 42. These are some of the sample images that come out. So again, they're semi-realistic, but good enough for the data set to be generated to train the model. And that's what this is really about. Normally, best practice is always to use real data to augment any type of synthetic data and vice versa. So if you do have real data, it's important to have it there as context, because things may be out of context as well.
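A very rough sketch of 3D synthetic generation with the open-source trimesh and pyrender libraries: load the OBJ (with its MTL textures alongside), render it from random viewpoints over a transparent background, and composite later. This assumes a roughly unit-scaled mesh and only illustrates the idea; it is not Chooch's renderer:

```python
# Minimal offscreen-rendering sketch; file names are hypothetical.
import numpy as np
import trimesh
import pyrender

# force="mesh" flattens a multi-part OBJ scene into one mesh.
mesh = trimesh.load("atr42.obj", force="mesh")   # expects atr42.mtl alongside
scene = pyrender.Scene(bg_color=[0, 0, 0, 0])    # transparent, for compositing
mesh_node = scene.add(pyrender.Mesh.from_trimesh(mesh))
scene.add(pyrender.DirectionalLight(intensity=3.0))

camera = pyrender.PerspectiveCamera(yfov=np.pi / 3)
camera_pose = np.eye(4)
camera_pose[2, 3] = 3.0                          # camera sits back, facing the origin
scene.add(camera, pose=camera_pose)

renderer = pyrender.OffscreenRenderer(640, 480)
for i in range(3000):                            # 3,000 images, as in the demo
    # A random rotation of the object is equivalent to a random viewpoint.
    scene.set_pose(mesh_node, trimesh.transformations.random_rotation_matrix())
    color, depth = renderer.render(scene, flags=pyrender.RenderFlags.RGBA)
    # ...composite `color` over a background theme and record the bounding box.
renderer.delete()
```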

So it's important to generate these together at the same time. So what's the result here? I mean, why are we doing this? It's higher F1 scores, higher accuracy, and dynamic models. With model drift, you've got problems with accuracy, and you need that feedback and data set to constantly generate higher F1 scores. So here's what you can do on the system: you can upload a test data set, which is what we're going to do, a parts testing data set. You upload the entire data set, which is already annotated, and it'll start calculating F1 and give you that score. In order to make it a very quick process to understand how that model is actually performing, how it's being used, and what the accuracy is, you need to be able to have this test data set. And then you get the different types of F1 scores here based on that.
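To make the F1-on-a-test-set idea concrete, here is one common way a detection F1 score can be computed: match predicted boxes to ground-truth boxes by IoU, count true positives, false positives, and false negatives, then combine precision and recall. This is a generic sketch (ignoring class labels and confidence scores), not Chooch's exact scoring code:

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def f1_score(predictions, ground_truth, iou_threshold=0.5):
    matched = set()
    tp = 0
    for p in predictions:
        for i, g in enumerate(ground_truth):
            if i not in matched and iou(p, g) >= iou_threshold:
                matched.add(i)
                tp += 1
                break
    fp = len(predictions) - tp        # predicted but wrong: hurts precision
    fn = len(ground_truth) - tp       # missed objects: hurts recall
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```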

We usually recommend deploying after an F1 score of 90; then you can deploy to production. Anything below 90 depends on the use case, but it can be problematic. So you can see that here, and then automatically deploy on any of the devices that you might have, or just use the cloud API. So we can go on to questions now, and I'll just stop sharing this. Actually, I'll keep sharing because I might refer back to it. Yeah.

Jeffrey Goldsmith: Yeah. Thanks, Emrah. That was a pretty complete overview of how we generate data. So we do have a few questions. For the benefit of those who don’t know, please explain why should we use synthetic data? I think we’ve answered that.

Emrah Gultekin: Let me get into detail on these questions, because this is quite important. You don't have to. Synthetic data is not a must for everything; it's a tool. It's a component, something to use if you choose to, depending on how much data you already have. If you do have real data, that's always better. But from our experience in the market, it's very difficult to come across, so you synthesize the data. I'll give you an example from text recognition. Text detection and text recognition would not exist today without synthetic data. That stuff is all synthetic: understanding text in natural environments. We believe that's going to be the case with object detection and image classification as well. But it's not a must; you don't need it to train a model. These are just tools to help. And again, there are companies that we partner with that generate synthetic data as well, or do annotation work, and you can upload those into the system and train the model. Our core as a company has always been the inference engine.

Jeffrey Goldsmith: Okay, the next question, is it possible to try the synthetic data generation tool before committing to a trial in the system?

Emrah Gultekin: Oh, yes it is. So just get in touch with us. We’ll upgrade you to enterprise for a trial without a fee basically, so you can use it. It’s out of the box.

Jeffrey Goldsmith: Yeah. It just requires that you … For the 3D synthetic generation, as Emrah said, you need a CAD file to make that go.

Emrah Gultekin: CAD and material file, yeah, the texture. If you don't have the MTL file, the textures won't be randomized, basically.

Jeffrey Goldsmith: So is this a web-based tool, or does it require local scripting, coding, and Docker deployment?

Emrah Gultekin: So this is a web-based tool, and it all resides on the cloud. You can basically log in from anywhere and use it. The annotation tools and the synthetic tools are all web based, and the inferencing is also web based, unless you want the inference engine deployed on the edge or on-prem, and you can do that through the system as well. So you can set up a device, like any of the Nvidia GPUs, even the Jetson devices, then pull the Docker image and have the inferencing run on the edge as well. But yeah, you can use it on the cloud.
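For a sense of what "hitting the cloud" looks like from code, here is a hedged sketch of posting an image to an HTTP inference endpoint. The URL and field names are placeholders, not Chooch's real API; consult their API documentation for the actual interface:

```python
import requests

API_URL = "https://api.example.com/v1/predict"   # placeholder, not the real URL

with open("frame.jpg", "rb") as f:
    response = requests.post(
        API_URL,
        files={"image": f},
        data={"apikey": "YOUR_API_KEY"},         # placeholder credential
        timeout=30,
    )
response.raise_for_status()
print(response.json())                           # detected classes, boxes, scores
```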

Jeffrey Goldsmith: Yeah. And you can sign up today in the upper right corner of the website and start using it. We have a question. What challenges have you seen in using 3D synthetic data? Challenges, Emrah?

Emrah Gultekin: Yeah, so 3D synthetic data or 2D synthetic data, it’s not a panacea. It’s not going to solve all your problems, it’s not going to … And so the issue has been the expectations of the market. And the 3D synthetic data in particular is harder to come by because it requires a 3D model of that particular object. And so that usually … If you’re a manufacturer of that object, you probably have it. But outside of that, you’re not going to get a 3D model unless you work with let’s say, a 3D model designing company that can generate those. But that’s been the challenge with 3D is just getting the CAD and the texture files ready. And that’s something that is overcome by some of the clients that we work with.

Jeffrey Goldsmith: Okay, the next question is quite important. I suppose this person missed the beginning of the presentation. Can we create models using Chooch, or just generate data? And the answer is yes. The point of creating data is to generate models. So we generate data, we create models, and then we do inferencing; the whole life cycle is there at Chooch AI.

Emrah Gultekin: So the whole point of generating the data is to create a model, and you can do that on the system as well. We can go into more detail on that in another webinar. But basically, our whole thing has been the inference engine, which is the model itself. And by the way, to test the data set, the best thing to do is to train it, because you're never going to be able to test it properly otherwise.

Jeffrey Goldsmith: Right. And that leads us into our next question, which is, could you give a more intuitive explanation of what an F1 score is?

Emrah Gultekin: Yeah. So F1 is what data scientists like to use, but it's basically a fancy name for accuracy. Accuracy is made up of two things. There's precision, which is about false positives. So, is this an airplane? If it says a helicopter is an airplane, that's a false positive. And then you have recall, where it should have detected something but it didn't, and that would be a false negative. F1 is basically a harmonic mean, a kind of average, of those two. It's just a fancy name for accuracy.
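For reference, the formula behind this is the harmonic mean of precision and recall:

F1 = 2 × (precision × recall) / (precision + recall)

So with, say, precision = 0.9 and recall = 0.6, F1 = 2 × 0.54 / 1.5 = 0.72, whereas a simple average would give 0.75; the harmonic mean punishes a weak precision or recall more heavily.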

Jeffrey Goldsmith: Yeah. We are getting some questions in chat too, so I implore you to enter your questions in the Q&A, but we'll get to the chat questions after we're done with the Q&A. Let's see here. Any recommendations on how much synthetic versus real data is needed for a successful computer vision model batch size?

Emrah Gultekin: Yeah. For our system, where you create the models, we recommend a minimum of 1,000 images per class. Depending on the use case, you want a minimum of 100 to 150 real images, so about 10 to 15% of that. And then you can generate some 2D on top of that and some data augmentation. 3D is a separate deal; you can generate more with that, and there are different best practices for it. But you want to get to about 1,000 images. How you get there is up to you. You can use synthetic, and if you have real data, that's always better. But a minimum of 1,000, and then just keep going up from 1,000 for production purposes.

Jeffrey Goldsmith: Okay. Do you support deployment on mobile? We actually have a mobile app. But I really want to …

Emrah Gultekin: Yeah, so deployment on mobile is through an API, so you'll be hitting the cloud. If you're talking about the models being deployed on the mobile device itself, we don't do that at the moment. But we do deploy on Nvidia Jetson devices, and also on Intel CPUs. In terms of the mobile apps, it's traditionally been just API calls to the cloud.

Jeffrey Goldsmith: Yeah, we actually have a mobile app if you search in any app store for Chooch IC2. You can install it on your phone and try it. It basically sends screen grabs from your video feed to the cloud and it sends back metadata. It’s pretty cool, actually. So next question, how important is it to train the data with different backgrounds? Can we load our own backgrounds with different conditions and lightings to train? There we go.

Emrah Gultekin: Yeah, it's a good question. So we have some preset backgrounds that you can use. But usually what happens is the client has their own warehouse or manufacturing area, and they can load up that background as well, and use it to synthesize their data. That's under My Themes. You can upload to a data set itself, or you can upload to raw data, and just pull that in when you click on My Themes.

Jeffrey Goldsmith: Okay, great question. What if my data is sensitive and cannot go out of my company? Is it possible to deploy it on company servers?

Emrah Gultekin: Yeah, so this is a great question. And what we’re doing is we have … The inference engine is Dockerized. And you can deploy the inference engine on prem, which means none of that video inferencing or image inferencing will leave your device or your on prem installation. For the entire system, we’re working on Dockerizing that, including the training system and the data generation system, and we have clients who are waiting for that actually. It’s quite important. So we’re going to have that out by the end of the summer where you can basically take it and use it anywhere you want on any of the servers.

Jeffrey Goldsmith: Yeah. And the next question is a comment. We need a webinar on model creation. Well, I’m sure we’ll publish one on that very soon. Is there an advantage to using a tight bounding polygon over a simple bounding box, advantages of background removal?

Emrah Gultekin: Yeah, that's a very good question. Depending on your use case, you're going to choose between bounding box and polygon, and data scientists know which ones work for what. For 90% of the use cases that we do, bounding boxes are fine. But even with the bounding box, the system actually segments that piece out, and that's how the 2D generation works. So background removal is crucial; you cannot do background removal without polygon segmentation, and that's how the 2D synthetic works. But in terms of inferencing and creating data sets, bounding boxes are usually enough for generic deployments. For sensitive deployments like satellite imagery or radiology, you definitely need polygons.

Jeffrey Goldsmith: Yep. We already answered this to some degree; perhaps you can dive into it a little more. Do you support edge inferencing?

Emrah Gultekin: Yeah, we support edge inferencing. The inference engine is exportable into the GPU devices and also Intel devices. And it’s basically a Docker that you pull and it gets set up within half an hour. And you can put models onto it, you can do updates, you can erase models, you can visualize the inferencing on it. So yes, edge inferencing is a crucial part of this whole system because if you don’t have edge inferencing, it’s virtually impossible to scale video in the long run.

Jeffrey Goldsmith: And we’ve actually got documentation on how to deploy to the edge on our website, under the products section at the bottom. There’s a few different help documents. Next, synthetic data looks different. How can we know it will perform well?

Emrah Gultekin: Yeah, so it looks different and you’re going to have to iterate on it. And that’s what we do as well. And the way to iterate is you create a model, basically. And you create the model on the system and see how it performs, test it on the system that you’ve created the synthetic data on, it’s basically all together. So you’re right, you need to iterate on these. It’s not going to perform well the first time. It’ll perform … Depends on the use case, obviously but usually it takes us about four to five iterations to get to a model that is production ready.

Jeffrey Goldsmith: Pretty technical question here. Do you use GANs or VAEs to create synthetic data?

Emrah Gultekin: This is a good question. We are not using GANs or VAEs for generation yet; we're using randomization. I think some of our ML engineers could answer this much better, but basically, we're in the process of putting GANs into the system as we speak. It's a good question.

Jeffrey Goldsmith: Here's a pretty good thought leadership question: what impact will synthetic data generation have on the future of computer vision?

Emrah Gultekin: So think of it this way: you do have a lot of data out there, but it's unstructured. And our duty as people in the computer vision industry, or AI in general, is to structure that data. The way you structure data is by seeding it through the inference engine in order to detect things. So the impact it will have on computer vision is enormous in terms of getting the models out and having different types of models. Synthetic data is very, very important. But again, it's a tool; it's not the end-all here. We don't want to overemphasize it. It is an important tool, but it is part of a larger thing that's going on in computer vision.

Jeffrey Goldsmith: Is the background combined with data and synthesized? I’m not quite sure I understand the question there.

Emrah Gultekin: Yeah, I think I do, but maybe I'm wrong. So in 3D synthetic data, you have the object file, the CAD file, and the material file, the MTL, and then you choose a background as well. So you choose the background images. It could be an already generated theme that we have on the system, or it could be your own theme. So it synthesizes, randomized, against those backgrounds. The background itself is not synthesized; the background is what you put into the system. You might have 2,000 background images of your warehouse, say, and that's where it will be placing those different objects.

Jeffrey Goldsmith: Okay. The last question in Q&A and then we’ll move over to the chat. Can you please explain using randomization for synthetic data generation?

Emrah Gultekin: Yeah. So in terms of randomization, you have different tools here that you are able to … There are toggle switches basically. So if you’re a power user, you can randomize the way you want and basically, place things in the places you want or you just use general randomization. And it’s on a randomization principle, where you can basically just let the machine randomize what it’s doing. But what’s important here is if you are a power user, you want to be able to control that randomization. And so you have those toggle switches in advanced settings to do that.
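To illustrate what such randomization toggles might cover, here is a hypothetical configuration. These names are invented for illustration and are not Chooch's actual settings:

```python
# Hypothetical set of randomization "toggles" a power user might control
# when generating synthetic images.
synthesis_config = {
    "rotation_degrees": (0, 360),        # full random rotation
    "scale_range": (0.3, 1.2),           # apparent distance variation
    "max_objects_per_image": 3,
    "grayscale_probability": 0.1,
    "noise_probability": 0.3,
    "background_theme": "my_warehouse",  # user-uploaded theme vs. a preset
    "seed": None,                        # None = fully random each run
}
```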

Jeffrey Goldsmith: Emrah, are there any particular verticals, e.g., manufacturing, where you see more adoption of synthetic data?

Emrah Gultekin: Honestly, the clients don’t care about the use of synthetic data or real data, or what type of data you’re generating. What they really care about is how the model performs out of the box, how the model performs in their environment. So what we’re seeing more and more though is that the synthetic data is being used more in certain verticals where there’s a lack of data or lack of structured data, and manufacturing is one of them actually. But we’re seeing it more in geospatial.

Jeffrey Goldsmith: We keep getting questions, which is fine, we still have some time here. How can you generate, for example, a partially covered stop sign with automated labeling? I mean …

Emrah Gultekin: It's a good question. This goes into CGI, really, and it is a universe unto itself. You need a CGI person to create that base data to generate that. [crosstalk 00:54:46]. So you generate one of them, and then the system basically randomizes the rest.

Jeffrey Goldsmith: Exactly. Is it possible to train them on like … We answered this to some degree, but is it possible to train the model and export it in order to run it without Chooch?

Emrah Gultekin: This is a good question, because the models that you see on the Chooch system are not really single models. They're always ensembles, and they're encoded with information from the past, from what the system has already generated over the years. So it's not possible to run them without the Chooch system on-prem or on a device, but you are able to export the system as well. You can export the system to wherever you want, but you can't take these models out on their own and have them perform.

Jeffrey Goldsmith: Let's see here, could you say which one works better: training the model entirely with synthetic data and later fine-tuning it with real data, or training the model with hybrid data, synthetic and real? Depends on the use case.

Emrah Gultekin: So it’s always better to have real data, that is key. It’s always important to have that. The more real data you have, the better your model’s going to perform in the long run. But if you don’t have real data, you need to synthesize it. And that’s where the synthetic data comes into play. So if you do have real data, it’s important to have that with the data set that you’re basically trying to train.

Jeffrey Goldsmith: Yeah. Last question … Oh, there's even one more in the Q&A. They keep coming. So: I want to share the presentation with coworkers, where can I find the video link? I'll post it on our blog tomorrow, sometime in the afternoon, but it'll also be automatically sent to everyone who signed up for the Zoom webinar, and it'll be on LinkedIn as well. Let's see. Does Chooch work with all the major public cloud vendors, AWS, Azure, IBM, etc.?

Emrah Gultekin: Yeah, so currently we're on AWS, and we're looking into some of the others. The Dockerization of the entire system is an important milestone for us. Once we have it Dockerized, you can basically take it anywhere; you don't even have to contact us. It could work on private cloud, on-prem, and so forth.

Jeffrey Goldsmith: Okay, so let’s move to the chat here. First question is, is it a web service? And the answer is yes. If you go to Chooch.ai, go up to the upper right corner of the screen and sign up and you’ll see our web service right there. The second question, OpenVINO needs pre-trained models, correct? So perhaps your solution is more adequate for TensorFlow training. Does that make sense? Look at the chat.

Emrah Gultekin: Yeah. Okay. The backend of this is PyTorch, Balloon, TensorFlow, TensorRT for the compression and so forth. These deep learning frameworks, some of them perform better in different environments. So it’s always an ensemble on our system. But TensorFlow is definitely one of them. So we do use TensorFlow for some of the image classification.

Jeffrey Goldsmith: Another question from the same attendee: is your platform only used to generate training data sets and annotations, or do you put together object AI recognition applications too?

Emrah Gultekin: Yeah. That's a good question. So our core is actually the inferencing; the models are our core. These are only tools to beef up the data set in order to make the model better. That's really what this is about, but the core is the model. So you just click create model, you click the data set that you've generated, and it'll generate that model, and then you can test the model on the same system.

Jeffrey Goldsmith: And we’re almost done with the questions here and we’re 10 minutes before the top of the hour. Does your system handle domain adaptation? Domain adaptation. Do you see that question, Emrah?

Emrah Gultekin: Yeah, I don’t understand the question.

Jeffrey Goldsmith: Yeah. Matthew, could you rephrase that potentially, if you’re still on the call and we’ll answer that in a moment. How much accuracy increased going from 2D to 3D synthetic?

Emrah Gultekin: In general, depending on how much data you have: from real, manual data, of which you might have 100 images, to then creating 2D with augmentation, say 1,000 images, you're going from 50% accuracy to about 90% on average. But it really depends on the use case. From 2D to 3D, there's no comparison, because it's a different thing; 3D synthetic data is very, very different, so we don't have metrics on that. But going from real data alone to adding any type of synthetic with augmentation, you're basically increasing the accuracy by leaps and bounds.

Jeffrey Goldsmith: Okay, looks like this is our last question. Usually, 2D image object detection is feasible. What about 3D image object recognition? Where are we on that?

Emrah Gultekin: We don't do 3D object detection or recognition; it's 2D. And the reason is the market: all the sensors are 2D, all the cameras are 2D. It's an important question and we see a future in that, but the current market is all 2D.

Jeffrey Goldsmith: Okay. Well, thank you everyone for your questions. Oops, there's one more. Hi, in my experience, on that question about domain adaptation: isn't it a sub-discipline of machine learning which deals with scenarios in which a model trained on a source distribution is used in the context of a different but related target distribution?

Emrah Gultekin: Right, okay, so you're talking about the domain in that sense. Yeah. If the domain is changing, let's say the views are changing, usually you need to tweak the model. And that's part of the process of creating dynamic user feedback for drift. That's really what this is about. So if you go back to the earlier slides, where you're getting new user feedback with annotated or just raw images or video streams: with a change in domain, you put that into shadow data sets, which are checked by humans and checked by a machine. And then either you retrain the model, or you create a new model if the domain is very, very different. So there are a few layers going on to trigger different models, depending on the scene and on the domain.

Jeffrey Goldsmith: So, second-to-last question. If anybody has anything else, please post it now. What kind of support do you have for transfer learning? Transfer-

Emrah Gultekin: Yeah. So the whole system is based on transfer learning, actually; that's how we've generated these models and these classifications as well. It's based on transfer learning from a base data set that we trained initially, and we keep retraining that as well.
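As a generic illustration of transfer learning (not Chooch's internal setup), here is the usual pattern: take a backbone pre-trained on a large base data set, freeze it, and retrain a new classification head on your own classes:

```python
# Minimal transfer-learning sketch; requires torchvision >= 0.13 for
# weights="DEFAULT". The class count is a hypothetical example.
import torch.nn as nn
import torchvision

model = torchvision.models.resnet50(weights="DEFAULT")   # pre-trained base
for param in model.parameters():
    param.requires_grad = False                          # freeze the backbone

num_classes = 5                                          # e.g., your part types
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head
# ...then train model.fc on your (real + synthetic) data set as usual.
```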

Jeffrey Goldsmith: Okay, great. Well, I think we are done here. Thanks for the talk. Can you share the recording with us? Absolutely, I can share the recording with you. It’ll be on the blog tomorrow. You’ll get notified by Zoom. Look for a link on the LinkedIn invite for the event and we’ll post it there. Emrah, thanks for the presentation. It was really well done. And thanks to the team for all your hard work putting together this technology. Please get in touch with us if you want any one on one support. And we’ll talk to y’all soon.

Emrah Gultekin: Thank you very much, everybody.

Learn more about computer vision with synthetic data.
