
  • Hello everyone and welcome to my channel.

  • This is Sudhini from South Bay, California and welcome to my channel where we talk about everything AI.

  • And today we are going to be covering a new annotation tool which is scalable and super useful for videos and images.

  • So if you have videos of lung CT, MRIs or even if you have videos of outdoor images or indoor images that you need to be annotated, then this is the software for you.

  • The software is called Scalabel and I will be walking you through all of the steps, right from installation and requirements to how to annotate standalone images, biomedical images, and autonomous-driving video tracks.

  • So if you are an annotator who wants to earn some extra bucks doing annotations at home for companies, or if you're a startup or an engineer with a batch of data that you want annotated quickly while ensuring quality throughout the process, then this video is meant for you.

  • So keep watching and let's get straight to it.

  • So this is the software that we are going to be reviewing today.

  • It's called Scalabel and it's by Berkeley DeepDrive, so BDD.

  • All of the documentation that we are going to be following is in this particular guide and I'm going to walk you through each and every one of these steps specifically for my Windows-based system because these directions are mainly for a Linux system.

  • I will show you the changes that need to be done in order to run it on a Windows system.

  • So far, I have done another review on another annotator which is called LabelMe and I'm going to be putting the link right up here.

  • So I wanted to show you what the differences are between LabelMe and Scalabel before we get to the review.

  • So LabelMe, as you have seen before, supports single-user annotation, so it works for single-user situations.

  • Then it also supports JSON to image label conversion.

  • So if you have JSON-format data, you can also convert that into images, which is typically required for semantic segmentation or U-Net sort of algorithms.

  • You can quickly generate images out of that.
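To make that JSON-to-image step concrete, here is a minimal pure-Python sketch of rasterizing polygon annotations into a class-id mask. The dict layout (`label`, `points`) is a simplified stand-in for illustration, not LabelMe's exact schema.

```python
# Sketch: turn polygon annotations (JSON-style dicts) into a label mask.
# The {"label", "points"} layout is a simplified stand-in, not LabelMe's schema.

def point_in_polygon(x, y, poly):
    """Even-odd rule ray-casting test for a single point."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this polygon edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def polygons_to_mask(width, height, shapes, class_ids):
    """Rasterize a list of {'label', 'points'} shapes into a 2D class-id mask."""
    mask = [[0] * width for _ in range(height)]
    for shape in shapes:
        cid = class_ids[shape["label"]]
        for y in range(height):
            for x in range(width):
                # sample at the pixel center
                if point_in_polygon(x + 0.5, y + 0.5, shape["points"]):
                    mask[y][x] = cid
    return mask

shapes = [{"label": "lung", "points": [(2, 2), (7, 2), (7, 7), (2, 7)]}]
mask = polygons_to_mask(10, 10, shapes, {"lung": 1})
```

In practice you would use an image library's polygon fill instead of this per-pixel loop, but the idea is the same: each polygon becomes a filled region of its class id.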

  • The installation requirements are on the low side, so it's very easy to install on a standalone computer.

  • It is definitely very useful for small-scale projects.

  • I have met a lot of developers who are essentially using this for their day-to-day project work.

  • And it is preferable for single images.

  • So single images means, if you have snapshots of maybe retinal images, snapshots from different patients, then LabelMe is very preferable for those sorts of situations.

  • Now let's look at what Scalabel can support.

  • Scalabel supports multi-user annotation.

  • So if you have, let's say, a huge task, say 900 or 1,000 samples, all belonging to particular videos, it will automatically create tasks for multiple users, say 3, 4, or 5, and each one of the annotators will have their own link to go and annotate.

  • So now you can actually scale your work from one annotator to multi-annotators.

  • In this case, the output is in JSON format.

  • So you cannot really generate images out of it directly.

  • In order to generate images out of the JSON, you will have to go back to something like LabelMe.

  • The third and most key difference is that Scalabel already has a Docker container created.

  • So there is a Docker image that you pull from Docker Hub, and that is the one that is going to be running.

  • So essentially, you do not have any installation requirements on your system.

  • If you have the Anaconda environment and you have Docker set up, it will do everything else.

  • And this is the latest trend, the direction most applications are heading.

  • So you don't really have a list of requirements, functions, or libraries you need to import.

  • You just get the Docker image down on your computer and you just run that.

  • So it's very useful, and once your Docker container's requirements are satisfied, it's very easy to distribute.

  • It is very easy to scale up, like I mentioned.

  • So once you know how to run your Docker container, you can ask four or five people to do the same process.

  • And it is very easy to replicate.

  • And this is highly preferable for video tracks.

  • So if you're doing video annotation, where the same objects are followed across different image frames, that's where Scalabel is the most useful.

  • So let's get straight to the installation and the annotation processes now.

  • All right.

  • So in order to install this particular annotation software, which is called Scalabel (scalabel.ai), there are a few things that we have to make sure your system has first.

  • First off, you need Docker.

  • And if you're using a Windows system, which I am, I have Anaconda on top of that.

  • So this would be the command in order to install Docker.

  • Now, I will be enlisting all of these commands in the description box below so that you can apply it for your own case.

  • So first of all, let's say that Docker has been installed.

  • The next thing you need is Docker Desktop for Windows, especially if you have a Windows system.

  • So this will take you to this particular page.

  • You install the executable.

  • And once the executable is installed, especially on a Windows machine, it will ask you to also enable the WSL backend, which ensures that you can run not just Windows but also Linux containers.

  • So this is very important for you to do.

  • Once all of this is done, you should then be able to launch your Docker desktop.

  • And in Docker Desktop, you can actually go in and generate your Docker login, which will then give you access to all the different Docker images that are available to you.

  • So you have to make sure all of this is enabled first for you to access this particular annotator, which has been pushed as a Docker container.

  • So now that Docker has been installed, let's go on to the next section.

  • You start with the GitHub repo.

  • So this is a GitHub repo that you unpack in the location of your choice.

  • Now, notice this particular folder called scripts.

  • These are the shell scripts that you need to run in order to ensure that your local environment is exactly what this app requires to run.

  • Now, the first thing this script is going to do is set up a local directory.

  • So what this particular code does is it creates this folder called local-data.

  • And this local-data folder is where this particular app is going to access images from, if you require it.

  • So let's say that I have this new batch of images that I need to annotate.

  • I am going to go ahead and put them inside this local-data items folder.

  • These are the CT images that I want annotated, so I go ahead and put them inside this items folder.

  • As soon as I do that, what the app does is generate a fake local path corresponding to each and every one of these images.

  • So the app can only access images through URL.

  • So if you have images that are available sitting in S3 buckets or GS buckets, then you can easily annotate them from there directly using the URL.
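As a sketch of what such an item list looks like, here is one way to generate it in Python. The `url`/`videoName` fields follow Scalabel-style image lists, but treat them as assumptions; the file names and base URL below are hypothetical.

```python
import json

# Sketch: build an image list for the annotation app.
# "url"/"videoName" follow Scalabel-style image lists (an assumption here);
# the file names and base URL are hypothetical.
def make_item_list(image_names, base_url, video_name=""):
    return [
        {"url": f"{base_url}/{name}", "videoName": video_name}
        for name in image_names
    ]

items = make_item_list(["ct_000.png", "ct_001.png"],
                       "http://localhost:8686/items/ct",
                       video_name="ct_scan_01")
payload = json.dumps(items, indent=2)
```

The same structure works whether the URLs point at the app's fake local paths or at images sitting in an S3 or GS bucket; the `videoName` field is what groups images into a single video track.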

  • All right.

  • So once you have your batch script and your local directory set up, then you need to do a docker pull.

  • So let me go to my Anaconda PowerShell.

  • First of all, let me activate the virtual environment where I have Docker.

  • Now I need to go to the path where everything is stored.

  • Right.

  • So Documents, annotations, scalabel-master, scalabel, and that's it.

  • So I am currently at the location.

  • So now let me check for Docker images.

  • So I have already done a Docker pull.

  • You see, this is the scalabel/www image, and you see it's 5.12 gigabytes.

  • So make sure you have that much space in order for it to run.

  • The other thing that needs to happen in order for this Docker image to run is that there are a few things you need to pass to it.

  • One of them is a config file.

  • It's the config.yaml.

  • So once this is there, I just run this command.

  • And it is now ready to accept any commands.

  • Right.

  • So let's say that I am going to go to this particular page.

  • And first off, we start by creating projects.

  • Right.

  • So you see, I already have two projects.

  • But for this case, let me start a fresh one.

  • So CT images.

  • Right.

  • And in this case, the item type is image, the label type is polygon, and it's 2D segmentation.

  • Now, there are three different attribute files that need to be shared.

  • These are the three files that I'll be sharing.

  • First off, if you see, it is going to be generating these HTTP links.

  • These are the fake path links that I talked to you about.

  • But if I just go to a particular URL and if I just paste this, you'll see that you can now access this image.

  • So if again, like I mentioned, if you have any image in an S3 bucket, you can just call the S3 bucket path here.

  • And this video name is going to make sure that it belongs to a separate video track.

  • So this is the important image list that you need to pass.

  • Then it's the categories that you want to annotate.

  • So it's, you know, ground glass opacity.

  • And then there is lung.

  • That is what I need to annotate.

  • And then the segmentation attributes.

  • In this case, I'm marking whether it's blurry, or truncated, or there is writing on the image scans, so those cases can be handled separately.

  • So for the item list, I pass the image list; I already have the categories and attributes files.

  • Again, the segmentation attributes file is not a super important file.
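A sketch of what the categories and attributes files might contain, written here as plain YAML strings. The category names come from the example above; the attribute fields (`toolType`, `tagText`) are assumptions based on Scalabel-style examples.

```python
# Sketch: the categories and attributes config files as plain YAML text.
# Category names come from the walkthrough; the attribute fields
# (toolType, tagText) are assumptions based on Scalabel-style examples.
categories_yaml = """\
- name: ground glass opacity
- name: lung
"""

attributes_yaml = """\
- name: blurry
  toolType: switch
  tagText: b
- name: truncated
  toolType: switch
  tagText: t
"""

def parse_simple_yaml_list(text):
    """Tiny reader for the flat '- name: ...' entries above
    (a demo helper, not a full YAML parser)."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("- name:"):
            entries.append(line.split(":", 1)[1].strip())
    return entries
```

You would save each string to its own `.yml` file and upload them on the project creation page alongside the item list.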

  • And now for the task size, I'll say give me 10 images per task.

  • And when you say dashboard, it will take you to the dashboard.

  • Right.

  • So here you will see these are the task links.

  • So whenever you're doing a particular set of jobs, this will take you to the task.

  • So this comes as the first task.

  • So here you can essentially run through each and every one of the 10 images, like I mentioned, and see if they are what you need or not.

  • So let's start annotating.

  • Once all of this is done, back in the CT project you now have the option to download labels.

  • If you do this, it is actually going to tell you that in this particular image 0, these were all the vertices corresponding to lung and opacity.

  • So again, it is always in the JSON format; you are not going to get an image format.

  • So you will have to use something else in order to convert from JSON to your images.

  • Because this particular software is made compatible with autonomous-driving sort of scenarios, it will only give you the outcome as JSON files.
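A sketch of digging the polygon vertices out of such a JSON export. The nesting below (frames with `labels`, each with `poly2d` entries holding `vertices`) mirrors Scalabel-style exports, but treat the exact field names as assumptions; the sample data is invented.

```python
import json

# Sketch: read a downloaded label export and pull out polygon vertices
# per category. Field names mirror Scalabel-style exports (an assumption);
# the sample data below is invented for illustration.
export = json.loads("""
[
  {"name": "image_0.png",
   "labels": [
     {"id": 1, "category": "lung",
      "poly2d": [{"vertices": [[10, 10], [40, 10], [40, 40]]}]}
   ]}
]
""")

def vertices_by_category(frames):
    """Collect every polygon's vertex list, grouped by label category."""
    out = {}
    for frame in frames:
        for label in frame.get("labels", []):
            for poly in label.get("poly2d", []):
                out.setdefault(label["category"], []).append(poly["vertices"])
    return out
```

From a structure like this you can feed the vertices into a polygon rasterizer to produce the mask images that the app itself does not generate.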

  • So now let me do a new one.

  • And this new project I'm going to call video bounding box.

  • And in this case, I'm going to call it video tracking labeling by bounding boxes.

  • In this case, the example attributes and image list are already present.

  • Again, these images correspond to autonomous drive situation.

  • So image list is already there.

  • For categories, I'm just going to pass the categories file; for attributes, the bbox attributes file; and then go to the dashboard.

  • So as soon as I go to the dashboard, you see the new one is actually created.

  • So now every single one of the tasks has around 23 images per track.

  • So you can literally look at every single one of the images.

  • Now, let's say that for this particular exercise, what I wanted to do is I wanted to track pedestrians.

  • Pedestrians are my regions of interest.

  • And I'm going to be creating bounding boxes.

  • If I run this, you can see it play up to wherever you annotated.

  • Now you can actually see the people being followed by the same color.

  • So you are not going to get a different color bounding box every single time.

  • But you can actually link these pedestrians based off of their movement.

  • So this now allows you to generate things like MOT (multi-object tracking) metrics, or anything where you actually need a specific object ID: these two pedestrians will each have a unique object ID, and you will now be able to run things like that.

  • So if you have aerial view videos or if you have street view videos, then this particular software is super useful.
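A sketch of grouping boxes by object ID across frames to recover the tracks, e.g. as input to MOT-style evaluation. The `id`/`box2d` field names and the sample data are illustrative, not the exact export schema.

```python
# Sketch: group bounding boxes by object id across frames to recover tracks,
# e.g. as input to MOT-style evaluation. The "id"/"box2d" field names and
# the sample data are illustrative, not the exact export schema.
def group_tracks(frames):
    """Map each object id to its list of (frame_index, box) observations."""
    tracks = {}
    for frame_idx, frame in enumerate(frames):
        for label in frame:
            tracks.setdefault(label["id"], []).append(
                (frame_idx, label["box2d"])
            )
    return tracks

# Two frames: ped_1 appears in both (slightly shifted), ped_2 only in frame 0.
frames = [
    [{"id": "ped_1", "box2d": (5, 5, 20, 40)},
     {"id": "ped_2", "box2d": (50, 5, 65, 42)}],
    [{"id": "ped_1", "box2d": (7, 5, 22, 40)}],
]
tracks = group_tracks(frames)
```

Because the annotator keeps the same ID (and color) for a pedestrian across frames, each entry in `tracks` is one continuous trajectory.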

  • So once you're done, you just go and hit submit.

  • And once it is submitted, you can actually go back and download it.

  • Go to the create page.

  • This was the video bounding box project, and I can just say download.

  • And you see, for each and every one of the images, the annotations get downloaded.

  • Now, finally, what I also wanted to show you is how you would go about stopping the app.

  • So this app is running, and it's going to keep running on your local system until you literally stop it.

  • So in order to do that, what I'm going to be doing is I'm going to open another shell.

  • And then we need to check which Docker containers are running.

  • So you see, this is the container ID that it is running on.

  • So all I need to do now is a Docker stop and I'm going to copy the container ID and paste it here.

  • Once this is done, you see there are no more containers running.

  • So this will ensure that your system has stopped running and everything is stopped and paused for the day.

  • Finally, I would like to conclude with LabelMe versus Scalabel.

  • Again, I found both to be equally useful.

  • I have had more experience with LabelMe, so I find it a little bit easier to use.

  • However, I can definitely see that for a new task or a new kind of images, maybe aerial view images, or aerial view images for indoor 3D mapping sort of situations, using Scalabel is actually the proper method to get a lot of good-quality annotated data in a small amount of time.

  • And it is also very easy to get up and running because of its Dockerized format.

  • So definitely a thumbs up for Scalabel.

  • Do try it out and do leave me comments as to what you thought.

  • Do give it a thumbs up and like and subscribe to my channel.

  • So thank you and look forward to the next video.


Video Annotation Software Scalabel Review: Installation to Annotation Guide for Windows

宜均陳 posted on 2024/11/15