Upload videos and images to Amazon Simple Storage Service (Amazon S3) to train the livestock detection model.
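To illustrate this step, the sketch below enumerates local media files and maps each one to the S3 key it would be uploaded under. The bucket name and prefix are placeholders, not from the source; each `(local_path, s3_key)` pair would then be passed to boto3's `s3.upload_file(local_path, bucket, s3_key)`.

```python
from pathlib import Path

# Placeholder names -- substitute your own bucket and prefix.
BUCKET = "livestock-training-data"
PREFIX = "raw-media"

MEDIA_SUFFIXES = {".mp4", ".mov", ".jpg", ".jpeg", ".png"}

def plan_uploads(media_dir, prefix=PREFIX):
    """Map local videos/images to the S3 keys they would be uploaded under."""
    uploads = []
    for path in sorted(Path(media_dir).rglob("*")):
        if path.suffix.lower() in MEDIA_SUFFIXES:
            key = f"{prefix}/{path.relative_to(media_dir).as_posix()}"
            uploads.append((str(path), key))
    return uploads
```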
Use an Amazon SageMaker notebook to process these videos and create a labeling job using Amazon SageMaker Ground Truth.
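A bounding-box labeling job of this kind could be sketched as the request skeleton below. All names and ARNs are placeholders, and a real call to boto3's `sagemaker.create_labeling_job` also needs a `UiConfig` and the pre- and post-annotation Lambda ARNs, which are omitted here for brevity.

```python
def labeling_job_request(job_name, manifest_uri, output_uri, role_arn, workteam_arn):
    """Skeleton request for a Ground Truth bounding-box labeling job.

    All names and ARNs are placeholders; a complete request also requires
    UiConfig and annotation-consolidation settings.
    """
    return {
        "LabelingJobName": job_name,
        "LabelAttributeName": "livestock-boxes",  # where Ground Truth writes each box
        "InputConfig": {
            "DataSource": {"S3DataSource": {"ManifestS3Uri": manifest_uri}}
        },
        "OutputConfig": {"S3OutputPath": output_uri},
        "RoleArn": role_arn,
        "HumanTaskConfig": {
            "WorkteamArn": workteam_arn,
            "TaskTitle": "Draw a box around every animal",
            "TaskDescription": "Bounding-box annotation for livestock detection",
            "NumberOfHumanWorkersPerDataObject": 1,
            "TaskTimeLimitInSeconds": 300,
        },
    }
```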
Split the annotated dataset into training and validation sets, and train the livestock detection model with Amazon SageMaker distributed training.
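The split itself is straightforward; a minimal sketch (the 80/20 ratio and fixed seed are assumptions, not from the source):

```python
import random

def split_dataset(items, val_fraction=0.2, seed=42):
    """Shuffle annotated items and split them into training and validation sets.

    A fixed seed keeps the split reproducible across training runs.
    """
    items = list(items)
    rng = random.Random(seed)
    rng.shuffle(items)
    n_val = max(1, int(len(items) * val_fraction)) if items else 0
    return items[n_val:], items[:n_val]
```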
Use Amazon SageMaker Neo to optimize the livestock detection model for specific target devices such as NVIDIA Jetson Nano, TX2, Xavier, AWS DeepLens, or Raspberry Pi.
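A Neo compilation job for those targets could look like the request below, which would be passed to boto3's `sagemaker.create_compilation_job`. The framework and input shape are assumptions for illustration; `jetson_nano`, `jetson_tx2`, `jetson_xavier`, `deeplens`, and `rasp3b` are Neo's identifiers for the devices listed above.

```python
def neo_compilation_request(job_name, model_uri, output_uri, role_arn,
                            target="jetson_nano"):
    """Request body for a SageMaker Neo compilation job.

    The MXNet framework and 512x512 input shape are illustrative
    assumptions; match them to the actual trained model.
    """
    return {
        "CompilationJobName": job_name,
        "RoleArn": role_arn,
        "InputConfig": {
            "S3Uri": model_uri,
            "DataInputConfig": '{"data": [1, 3, 512, 512]}',  # assumed input shape
            "Framework": "MXNET",
        },
        "OutputConfig": {"S3OutputLocation": output_uri, "TargetDevice": target},
        "StoppingCondition": {"MaxRuntimeInSeconds": 900},
    }
```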
Deploy the machine learning model and the counting application (an AWS Lambda function) to the edge device using AWS IoT Greengrass.
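With AWS IoT Greengrass v2, the model and the counting function would be packaged as components and pushed in one deployment. A minimal sketch of the document boto3's `greengrassv2.create_deployment` would receive — the component names and target ARN are placeholders:

```python
def greengrass_deployment(target_arn, model_version, app_version):
    """Deployment document pushing the model and counting components
    to the edge device; all names here are placeholders."""
    return {
        "targetArn": target_arn,  # the edge device's IoT thing (or thing group) ARN
        "deploymentName": "livestock-counter",
        "components": {
            "com.example.LivestockModel": {"componentVersion": model_version},
            "com.example.CountingFunction": {"componentVersion": app_version},
        },
    }
```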
Consume live video streams from a camera at the farm over Real-Time Streaming Protocol (RTSP), or from a camera connected to the edge hardware through a Camera Serial Interface (CSI) or USB.
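The three connection types map to different capture sources. A small helper, assuming an NVIDIA Jetson board for the CSI case (the `nvarguscamerasrc` GStreamer pipeline) and placeholder host and stream names, might return the string an OpenCV `VideoCapture` would open:

```python
def capture_source(connection, device=0, host=None):
    """Return the capture source string for each camera connection type.

    Host, stream path, and the CSI pipeline parameters are illustrative;
    the CSI form shown is the nvarguscamerasrc pipeline used on Jetson boards.
    """
    if connection == "rtsp":
        return f"rtsp://{host}/stream1"
    if connection == "usb":
        return f"/dev/video{device}"
    if connection == "csi":
        return ("nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1280,height=720 "
                "! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! appsink")
    raise ValueError(f"unknown connection type: {connection}")
```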
Run ML inference on the video frames from step 6 and pass the bounding box outputs to the counting application Lambda function.
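The counting application's core logic could be sketched as follows: filter detections by confidence, then smooth the per-frame count over a short window so a single missed detection does not flicker the reported herd size. The threshold, window length, and detection tuple layout are assumptions for illustration.

```python
from collections import Counter, deque

CONFIDENCE_THRESHOLD = 0.5  # assumed cutoff; tune for the trained model

def count_frame(detections, threshold=CONFIDENCE_THRESHOLD):
    """Count bounding boxes whose confidence clears the threshold.

    Each detection is assumed to be
    (class_id, confidence, x_min, y_min, x_max, y_max).
    """
    return sum(1 for _, confidence, *_ in detections if confidence >= threshold)

class SmoothedCount:
    """Report the most common per-frame count over a sliding window."""

    def __init__(self, window=15):
        self.counts = deque(maxlen=window)

    def update(self, detections):
        self.counts.append(count_frame(detections))
        return Counter(self.counts).most_common(1)[0][0]
```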
Connect to the web server running on the edge device and start or stop counting from a mobile application.
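On the device side, the web server needs only a small piece of shared state that its (hypothetical) `/start` and `/stop` endpoints toggle and the inference loop checks. A minimal, thread-safe sketch:

```python
import threading

class CounterControl:
    """Counting on/off state the edge web server toggles for the mobile app.

    The /start and /stop endpoint names are illustrative; the inference
    loop would skip counting whenever `counting` is False.
    """

    def __init__(self):
        self._counting = threading.Event()

    def start(self):
        self._counting.set()

    def stop(self):
        self._counting.clear()

    @property
    def counting(self):
        return self._counting.is_set()
```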
Submit near real-time counts to an inventory management system.
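The submitted count could be a small JSON message, published for example over MQTT via AWS IoT Core; the field names and site identifier below are illustrative, not from the source.

```python
import json
import time

def count_message(site_id, animal_type, count, ts=None):
    """Serialize a near real-time count for the inventory management system.

    Field names are illustrative placeholders.
    """
    return json.dumps({
        "site": site_id,
        "type": animal_type,
        "count": count,
        "timestamp": ts if ts is not None else int(time.time()),
    })
```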