Chooch: Using AI to Replicate Human Vision

Chooch uses AI to reduce costs and increase accuracy and flexibility while offering multi-purpose visual recognition capabilities


Visual AI is a transformative technology that enables machines to recognize and catalog images and other visual data more accurately than the human eye, often by an order of magnitude. Massive datasets are used to train the AI, enabling it to parse objects, actions, and states with increasing accuracy, whether for public or business benefit.

Alumni Ventures Group portfolio company Chooch is taking visual AI to a new level with an end-to-end platform that can be deployed quickly at scale and adapted for multiple functions, in contrast to most visual AI systems, which are single-purpose. For example, it can detect whether strolling pedestrians are social distancing, spot who is not wearing a hard hat on a construction site, and identify flora close to power lines that could spark the next wildfire.

Chooch technology has the potential to solve problems beyond human ability in the healthcare, geospatial data, industrial Internet of Things (IoT), safety, and security industries. The company has been deployed at scale to 18 enterprise customers and the U.S. government in areas such as aviation, media, geospatial, healthcare, security, retail, and industrial IoT. Chooch has also partnered with chip giant Nvidia to remotely deploy software on Nvidia's next-generation IoT devices at the network edge.

Differentiated Solutions: Cloud or Edge

Chooch's AI generates clusters of interacting models that combine high accuracy with rapid deployment; its video annotation can process up to 1,000 images per minute, enabling base models to be built in hours. The platform executes every step of the visual AI process, from data collection through annotation and labeling, training, model deployment, and integration.

Non-technical users can train, configure, and integrate the AI with business intelligence processes. The platform supports standard or custom models, which can be deployed in the cloud or on edge devices such as mobile phones, tablets, and cameras.

What We Liked About the Deal

AVG’s Waterman Ventures (for Brown alumni and friends) and sibling funds deployed capital in a $20 million Series A led by Vickers Venture Partners with participation from 212 Venture Capital, Streamlined Ventures, and others.

We were impressed by several aspects of the company and investment opportunity:

Major Enterprise Demand: Chooch is gaining traction among clients given its wide variety of applications. It reduces costs and increases visual accuracy by offering 2,400 custom AI models, plus 120 pre-trained models spanning 200,000 classes.

Promising Market: According to Grand View Research, the global AI market was valued at $62.4 billion in 2020 and is expected to grow at a compound annual growth rate of 42% through 2027.

Established Lead Investor: Lead investor Vickers Venture Partners is the largest non-government fund in Southeast Asia and has raised $1 billion across five funds. Deep Tech is one of its focus areas.


Contact [email protected] for additional information. To see additional risk factors and investment considerations, visit av-funds.com/disclosures.