It is important to make your ML model package in AWS Marketplace easy to use for users with different levels of experience. To do this, we recommend that you include the following elements as part of your Model Package listing.
# | Section | Description | Mandatory/Highly Recommended | Sample example |
---|---|---|---|---|
PD1 | Product Description | List the most important product use case and the supported input content type (short description) | Mandatory | Automatic Detection & Recognition of Vehicle License Plates from an image using Deep Learning ML Models |
# | Section | Description | Mandatory/Highly Recommended | Sample example |
---|---|---|---|---|
PO1 | Product overview | List the most important use case(s) for this product | Mandatory | The sole intention of this product is to find the most efficient way to recognize number plate information from a digital image. This process involves detecting a vehicle, localizing the license plate, and then segmenting & recognizing the characters from the license plate. |
PO2 | Product overview | Describe the differentiated capabilities of the model/algorithm | Highly Recommended | The use of synthetic data greatly improved the network's generalization, so that the exact same network performs well for license plates of different regions around the world. |
PO3 | Product overview | Summarize the model's performance metric on validation data | Mandatory | We used the OpenALPR dataset as the test set to evaluate the accuracy of the proposed method. Our system achieved an average accuracy of 89.33%. We evaluated the system in terms of the percentage of correctly recognized license plates; a plate was considered correct only if all of its characters were recognized correctly. |
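The plate-level accuracy metric described in PO3 (a plate counts as correct only when every character matches) can be sketched in a few lines. The function name and the sample strings below are illustrative, not part of any listing:

```python
def plate_accuracy(predictions, ground_truth):
    """Fraction of plates whose predicted string matches the label exactly.

    A plate is counted as correct only if every character is recognized
    correctly, i.e. the predicted string equals the ground-truth string.
    """
    correct = sum(1 for p, g in zip(predictions, ground_truth) if p == g)
    return correct / len(ground_truth)

# Two of three plates match exactly; the second differs in its last character.
preds = ["KL40L5577", "AB12CDE", "XYZ999"]
truth = ["KL40L5577", "AB12CDF", "XYZ999"]
accuracy = plate_accuracy(preds, truth)
```

Reporting the metric this way makes the "all characters correct" criterion explicit for buyers comparing listings.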
# | Section | Description | Mandatory/Highly Recommended | Sample example |
---|---|---|---|---|
H1 | Highlights | Describe the data (source and size) the model was trained on and list any known limitations | Mandatory | Vehicle Detection Model: trained on the Pascal VOC 2007-2012 dataset; accepts images of any size and resizes them to 416x416. The images selected for training this custom CNN for license plate detection included mostly European & American and some Brazilian and Taiwanese license plates. The dataset was created with 200 annotated images from the Cars, SSIG & AOLP datasets. About 4,000 augmented images were also used. The training dataset was considerably enlarged in this work by using synthetic and augmented data to cope with license plate characteristics of different regions around the world (Europe, United States, and Brazil). |
H2 | Highlights | List the core framework that the model/algorithm was built on | Highly Recommended | Vehicle Detection: uses the YOLOv2 Object Detection Network as a black box, merging the outputs related to vehicles (cars & buses) and ignoring other classes. License Plate Detection Model. License Plate OCR: the character segmentation and recognition over the rectified license plate is performed using a modified YOLO network. |
H3 | Highlights | Specify the latency metric and/or transactions per second on the recommended SageMaker compute instance. For algorithms, share any relevant benchmarks. | Mandatory | The inference time is highly dependent on the number of vehicles detected in a single image. The average response time for a single-image, single-vehicle inference on the compute-optimized ml.c5.2xlarge instance (8 vCPUs, 16 GB memory) is approximately 3.25 seconds. |
H4 | Highlights | Applicable research paper/repo related to the model/algorithm | Highly Recommended | |
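For the latency figure called for in H3, a minimal benchmarking sketch like the following yields the average per-request response time. The `invoke` callable here is a stand-in for a real endpoint call (this example substitutes a sleep so it can run anywhere):

```python
import time

def average_latency(invoke, payloads):
    """Average wall-clock seconds per call of `invoke` over `payloads`."""
    start = time.perf_counter()
    for payload in payloads:
        invoke(payload)
    return (time.perf_counter() - start) / len(payloads)

# Stand-in for a real inference call: each "request" takes ~10 ms.
avg = average_latency(lambda p: time.sleep(0.01), range(5))
```

In practice the callable would wrap an actual endpoint invocation, and the payloads would be representative sample images, so the reported number reflects the recommended instance type.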
# | Section | Description | Mandatory/Highly Recommended | Sample example |
---|---|---|---|---|
UI1 | User information | Mime-type for input data | Mandatory | Supported content types: image/jpeg, image/png |
UI2 | User information | Format and description for inference input (text) | Mandatory | This model accepts images in the mime-type specified above. |
UI3 | User information | Input data limitations (text) | Mandatory | The image must be at least 416x416. The model resizes the image to 416x416 before performing inference. |
UI4 | User information | Mime-type for inference output | Mandatory | Content type: text/plain |
UI5 | User information | Format and description for inference output (text) | Mandatory | For this license plate image, the ML model returned the following output. Sample output: KL40L5577. If your output is complex, here is a sample description of output for your reference: The model returns a JSON object, detections, that includes an array with an individual element for each face detected. Each element has two attributes: 1) box_points: the bounding box pixels of the detected face. The first value represents XX, the second value represents XX, the third value XX, and the fourth value XX. 2) classes: no_mask represents the probability score that the bounding box does not include a mask. When multiple faces are detected in the image, multiple inferences are returned as part of the array... |
UI6 | User information | Provide example to pre-process data (text) | Highly Recommended | |
UI7 | User information | List any custom arguments accepted by the model during inference. | Highly Recommended | E.g., the GluonCV YOLOv3 Object Detector describes its custom attributes as follows: The confidence score threshold can be configured in the range (0, 1). Here is the AWS CLI command for invoking the endpoint: aws sagemaker-runtime invoke-endpoint --endpoint-name your_endpoint_name --body fileb://img.jpg --content-type image/jpeg --custom-attributes '{"threshold": 0.2}' --accept json >(cat) 1>/dev/null |
UI8 | User information | Specify the recommended method of using an endpoint, i.e., real-time or Batch Transform Job. Include an AWS CLI code example to invoke the model. (text) | Highly Recommended | |
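Rows UI7 and UI8 describe invoking the endpoint with custom attributes. As a sketch only, the snippet below assembles the arguments a buyer might pass to boto3's `invoke_endpoint`; the endpoint name, payload bytes, and default threshold are placeholders, not values from any listing:

```python
import json

def build_invoke_args(endpoint_name, image_bytes, threshold=0.2):
    """Build the keyword arguments for sagemaker-runtime invoke_endpoint.

    `endpoint_name`, `image_bytes`, and `threshold` are placeholders; the
    content type and custom-attribute format mirror the UI1/UI7 rows above.
    """
    return {
        "EndpointName": endpoint_name,
        "Body": image_bytes,
        "ContentType": "image/jpeg",
        "Accept": "application/json",
        "CustomAttributes": json.dumps({"threshold": threshold}),
    }

# With boto3 installed and an endpoint deployed, the call would look like:
#   import boto3
#   runtime = boto3.client("sagemaker-runtime")
#   with open("img.jpg", "rb") as f:
#       response = runtime.invoke_endpoint(**build_invoke_args("my-endpoint", f.read()))
#   result = response["Body"].read()
args = build_invoke_args("example-endpoint", b"<jpeg bytes>")
```

Keeping the request construction in one place makes it easy for a listing's sample notebook to show both the real-time call and its expected content types.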
# | Section | Description | Mandatory/Highly Recommended | Sample example |
---|---|---|---|---|
AR1 | Additional Resources | Provide a validated notebook, data, and other resources in a Git-compatible repo (see attached). Note: this notebook and sample data will also be verified by the AWS Marketplace team | Mandatory | To clone the repository with the sample notebook, input, and output samples |
AR2 | Additional Resources | Sample inference input data for real-time invocation (text or link on Git) | Mandatory | https://gitlab.qdatalabs.com/quantiphi-sagemaker-marketplace-examples/vehicle-license-plate-recognition/blob/master/data/input/real-time/ |
AR3 | Additional Resources | Sample inference input data for batch invocation (link on Git) | Mandatory | https://gitlab.qdatalabs.com/quantiphi-sagemaker-marketplace-examples/vehicle-license-plate-recognition/tree/master/data/input/batch |
AR4 | Additional Resources | Sample inference output for real-time invocation for the input sample provided (text or links on GitHub) | Mandatory | https://gitlab.qdatalabs.com/quantiphi-sagemaker-marketplace-examples/vehicle-license-plate-recognition/tree/master/data/output/real-time |
AR5 | Additional Resources | Sample inference output for batch invocation corresponding to the batch input samples (text or links on GitHub) | Mandatory | https://gitlab.qdatalabs.com/quantiphi-sagemaker-marketplace-examples/vehicle-license-plate-recognition/tree/master/data/output/batch |
AR6 | Additional Resources | Links to additional resources such as an architecture diagram or related listings that integrate the model with other applications and services. | Highly Recommended | A blog post or link, such as this one, that explains the architecture as well as the process for using the model in a real-world application: VITech Lab Healthcare introduces Automated PPE compliance control on Amazon Web Services |
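The batch samples above pair with the Batch Transform option mentioned in UI8. As a sketch only, the snippet below assembles the arguments for SageMaker's `create_transform_job`; every name, S3 URI, and the instance type are placeholders chosen for this example:

```python
def build_transform_job_args(job_name, model_name, input_s3, output_s3):
    """Build keyword arguments for sagemaker create_transform_job.

    All names and S3 URIs are placeholders; the content type and instance
    type mirror the sample values used elsewhere in this listing guidance.
    """
    return {
        "TransformJobName": job_name,
        "ModelName": model_name,
        "TransformInput": {
            "DataSource": {
                "S3DataSource": {"S3DataType": "S3Prefix", "S3Uri": input_s3}
            },
            "ContentType": "image/jpeg",
        },
        "TransformOutput": {"S3OutputPath": output_s3},
        "TransformResources": {"InstanceType": "ml.c5.2xlarge", "InstanceCount": 1},
    }

# With boto3: boto3.client("sagemaker").create_transform_job(
#     **build_transform_job_args("demo-job", "demo-model",
#                                "s3://bucket/input/batch/", "s3://bucket/output/"))
job_args = build_transform_job_args(
    "demo-job", "demo-model", "s3://bucket/input/batch/", "s3://bucket/output/"
)
```

A listing's sample notebook can point such a job at the batch input prefix from AR3 and compare the results against the batch outputs from AR5.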