Turn video into insight. Find relevant images across multiple types of video files. These are the two core challenges of large-scale video surveillance systems. So, it is time to move beyond old-fashioned passive video surveillance and expand your capabilities with powerful new tools for the analysis of dynamic video surveillance content on next-generation surveillance platforms.
When large-scale closed-circuit television (CCTV) systems aggregate many hundreds, or more likely many thousands, of hours of continuous footage of boring, static content, how do you find exactly what you need from this massive archive? It is difficult, maybe impossible. Even after weeks of searching, you may still find nothing of value to your investigation. So, what can you do?
Locating the exact scene in a vast sea of video can be a frustrating and daunting task if all you have are manual controls that demand endless forward, reverse, and freeze-frame searching and back-and-forth scrubbing of video. It is nearly impossible. Yes, you may have some basic metadata search tools such as date and time of day, but if the system has hundreds of cameras, how can you review it all and maintain your sanity?
Luckily, vendors and developers have heard the call and are now providing more advanced search and retrieval tools to help locate and make use of footage.
Facial recognition permits tracking of suspects over time and place
Intelligent video analytics helps security and public safety organizations develop comprehensive security, intelligence, and investigative capabilities using video. You can use advanced search, redaction, and facial recognition analytics to find relevant images and critical information across multiple video files from multiple camera types. Selected live-streaming cameras are supported, as is ingestion of pre-recorded video from both fixed and in-motion cameras. Augment staff and improve the return on your camera investment by extracting information from captured video to uncover insights and patterns. These features are all possible with a modern intelligent video analytics platform.
Identifying targeted objects permits classification by age, sex, colour, height, and many other criteria to help track suspects more accurately and automatically
Supported Analytics
Virtual Tripwire
Tailgating
Facial Detection
Direction of Movement
People Counting
Loitering
Vehicle Behavior
Vehicle Characteristics
Traffic Management
Object Left/Taken
Queue Management
Dwell Time
Path Map
Heat Map
Searching for a suspect by facial recognition enhances search quality and reduces search times dramatically – the computer does the work, not you
Facial recognition
Enroll facial images of ‘people of interest’ in a watch list and the system compares them with faces captured by body cameras. High-quality matches are ranked for analyst review.
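For readers who want to picture what happens under the hood, a simplified sketch follows. It assumes face embeddings (numeric vectors) have already been extracted by an upstream detection model and ranks watch-list matches by similarity; the function names, fields, and threshold are illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical sketch: rank captured faces against an enrolled watch list.
# Assumes fixed-length face embeddings are produced by an upstream model.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_matches(capture_embedding, watch_list, threshold=0.8):
    """Return watch-list entries above the similarity threshold, best first."""
    scored = [
        (entry["person_id"], cosine_similarity(capture_embedding, entry["embedding"]))
        for entry in watch_list
    ]
    return sorted(
        [(pid, score) for pid, score in scored if score >= threshold],
        key=lambda item: item[1],
        reverse=True,
    )

# Example: one captured face compared against two enrolled identities.
watch_list = [
    {"person_id": "POI-001", "embedding": np.random.rand(128)},
    {"person_id": "POI-002", "embedding": np.random.rand(128)},
]
capture = np.random.rand(128)
print(rank_matches(capture, watch_list, threshold=0.5))
```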
People search
Configure characteristics such as age, gender, ethnicity, facial hair, hair color, clothing colors and patterns, to find matches within the selected files from multiple types of cameras.
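Conceptually, a people search of this kind is a filter over descriptive metadata that the analytics engine has already attached to each detection. The sketch below illustrates the idea; the attribute names and records are invented for illustration and do not reflect a particular product schema.

```python
# Hypothetical sketch: filter person detections by configured characteristics.
detections = [
    {"camera": "CAM-12", "timestamp": "2019-06-01T14:03:22", "age_range": "30-40",
     "gender": "male", "facial_hair": True, "upper_clothing_colour": "red"},
    {"camera": "CAM-07", "timestamp": "2019-06-01T14:05:10", "age_range": "20-30",
     "gender": "female", "facial_hair": False, "upper_clothing_colour": "blue"},
]

def people_search(records, **criteria):
    """Return records whose metadata matches every supplied criterion."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

print(people_search(detections, gender="male", upper_clothing_colour="red"))
```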
Detect changes to patterns
From live-streaming fixed cameras, receive automatic alerts when movement of objects is inconsistent with predefined patterns.
Redaction
Identify specific persons or objects, plus the style of redaction, and the system automatically redacts them across one or multiple files. Manual control is available to redact specific content as needed.
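At the pixel level, redaction usually comes down to blurring or masking the regions flagged in each frame. The following sketch shows the idea with OpenCV on a single frame; in practice the bounding boxes would come from the detection step, and here they are hard-coded purely for illustration.

```python
# Hypothetical sketch: blur flagged regions of a frame (e.g. faces or licence plates).
import cv2
import numpy as np

def redact(frame: np.ndarray, boxes) -> np.ndarray:
    """Blur each (x, y, w, h) region in place and return the frame."""
    for x, y, w, h in boxes:
        region = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
    return frame

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a decoded video frame
flagged = [(100, 80, 120, 120)]                    # boxes would come from detection in practice
redacted = redact(frame, flagged)
```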
Advanced analytics
Abandoned object detection, track summaries and heat maps, and redaction are advanced video analytics built on years of IBM vision computing research.
Video to public safety insight
IBM Intelligent Video Analytics cuts through the vast linear process of video monitoring by converting video images to data.
Value with video analytics
Augment staff and improve camera investment ROI by extracting key information from captured video to uncover insights and patterns.
Create a security model
Customize the “monitor and alert” parameters from live-streaming fixed cameras to help identify perimeter breaches, abandoned objects, and more.
Find the buried pictures
Save time when searching for relevant images. Advanced content-based detection algorithms reduce the time and improve the accuracy of cross-correlation and trend analysis.
Combining ingress / egress search parameters with facial recognition reveals when suspects arrive at and depart from a location
The IVA Analytics Engine analyzes live video streams shared from the Security Centre and generates Event and Alert metadata describing activity occurring in the video. The IVA Metadata Engine stores and indexes this metadata. Alerts are forwarded to IVA clients in real time, while Events are searchable from the IVA web user interface (UI).
When an operator wishes to view video related to an Alert or Event, the IVA web UI requests the relevant video stream from Security Centre and displays it in the UI.
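To make the Event/Alert distinction concrete, the sketch below models the kind of metadata records the engines might index: Alerts are pushed to clients as they occur, while Events accumulate and are queried later from the web UI. The record fields and search function are assumptions for illustration, not the actual IVA schema.

```python
# Hypothetical sketch of Event/Alert metadata records and a simple event search.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class VideoEvent:
    camera_id: str
    event_type: str            # e.g. "tripwire", "loitering", "object_left"
    start: datetime
    end: datetime
    is_alert: bool = False     # alerts are forwarded to clients in real time

events = [
    VideoEvent("CAM-03", "tripwire", datetime(2019, 6, 1, 2, 15),
               datetime(2019, 6, 1, 2, 16), is_alert=True),
    VideoEvent("CAM-03", "loitering", datetime(2019, 6, 1, 9, 40),
               datetime(2019, 6, 1, 9, 55)),
]

def search_events(records, camera_id=None, event_type=None):
    """Filter indexed events the way the web UI might query the metadata store."""
    return [e for e in records
            if (camera_id is None or e.camera_id == camera_id)
            and (event_type is None or e.event_type == event_type)]

print(search_events(events, camera_id="CAM-03", event_type="tripwire"))
```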
Body-worn cameras are generating millions of hours of content every week – but how can it be managed in a smart, predictable manner?
Body-worn cameras have been a growing trend in the law enforcement community for the last several years. Yet, as agencies worldwide establish body-worn camera programs, they are challenged with how to access, manage, protect, search, and easily share that video. The hundreds to even millions of hours of video that agencies – depending on size – capture weekly are simply overwhelming them and complicate compliance with Freedom of Information Act (FOIA) and Criminal Justice Information Services (CJIS) requirements. An intelligent video analytics platform can help solve this problem.
Trends, patterns, and history logs from sensors, when combined with video analytics, greatly enhance searches and reduce the time needed to locate specific content
When video analytics is combined with Internet of Things (IoT) sensor data, its value is dramatically amplified. IoT sensors can evaluate maximum and minimum thresholds, correlate trending data, and generate new searchable metadata. This collaboration of video and IoT is a powerful means to make sense of the content and add insight and perspective.
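One simple way to picture this collaboration is a time-based join: a sensor threshold crossing narrows the window of video events worth reviewing. The sketch below is an illustrative assumption of how such a correlation might be expressed; the sensor readings and event records are invented.

```python
# Hypothetical sketch: correlate IoT sensor threshold crossings with video events by time.
from datetime import datetime, timedelta

sensor_readings = [
    {"sensor": "door-contact-4", "value": 1, "time": datetime(2019, 6, 1, 2, 14, 50)},
    {"sensor": "temp-probe-2", "value": 71, "time": datetime(2019, 6, 1, 3, 0, 0)},
]
video_events = [
    {"camera": "CAM-03", "type": "tripwire", "time": datetime(2019, 6, 1, 2, 15, 5)},
    {"camera": "CAM-09", "type": "loitering", "time": datetime(2019, 6, 1, 9, 40, 0)},
]

def correlate(readings, events, window_seconds=60):
    """Pair each sensor reading with video events occurring within the time window."""
    window = timedelta(seconds=window_seconds)
    return [(r, e) for r in readings for e in events if abs(e["time"] - r["time"]) <= window]

for reading, event in correlate(sensor_readings, video_events):
    print(reading["sensor"], "->", event["camera"], event["type"])
```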
Metadata can classify a vehicle and then be used for tracking across multiple cameras and multiple platforms
Objects, such as a vehicle, can be classified through deep analysis to categorize and identify the year, colour, make, and model of the car. Once properly tagged, the subject vehicle can be tracked across multiple cameras and systems. Situational awareness of the subject vehicle drives better utilization of limited personnel resources and helps to position countermeasures efficiently and effectively.
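A minimal sketch of the idea: once a vehicle has been tagged with classification metadata, tracking it across cameras reduces to searching every camera's detections for the same attributes. The records and field names below are assumptions for illustration, not any specific system's schema.

```python
# Hypothetical sketch: find sightings of a classified vehicle across multiple cameras.
vehicle_detections = [
    {"camera": "CAM-01", "time": "2019-06-01T08:02", "make": "Honda", "model": "Civic",
     "year": 2016, "colour": "blue"},
    {"camera": "CAM-14", "time": "2019-06-01T08:19", "make": "Honda", "model": "Civic",
     "year": 2016, "colour": "blue"},
    {"camera": "CAM-14", "time": "2019-06-01T08:25", "make": "Ford", "model": "F-150",
     "year": 2018, "colour": "white"},
]

def track_vehicle(detections, **attributes):
    """Return detections matching the target vehicle's classification, ordered by time."""
    hits = [d for d in detections if all(d.get(k) == v for k, v in attributes.items())]
    return sorted(hits, key=lambda d: d["time"])

for sighting in track_vehicle(vehicle_detections, make="Honda", model="Civic", colour="blue"):
    print(sighting["time"], sighting["camera"])
```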
Using intelligent video analytics to better comprehend video surveillance content makes perfect sense. By converting the video to data and applying metadata to describe the media, we can make short work of searches, redaction, and other labour-intensive tasks.
Raw computing power, a large storage archive, and a comprehensive analytics platform will deliver a return on investment quickly and free up your security team to perform other, more important work and to be more visible to the public, which enhances the protection of your property, people, and assets.
About the Author:
Michael Martin has more than 35 years of experience in systems design for broadband networks, optical fibre, wireless and digital communications technologies.
He is a Senior Executive with IBM Canada’s GTS Network Services Group. Over the past 13 years with IBM, he has worked in the GBS Global Center of Competency for Energy and Utilities and the GTS Global Center of Excellence for Energy and Utilities. He was previously a founding partner and President of MICAN Communications and before that was President of Comlink Systems Limited and Ensat Broadcast Services, Inc., both divisions of Cygnal Technologies Corporation (CYN: TSX).
Martin currently serves on the Board of Directors for TeraGo Inc (TGO: TSX) and previously served on the Board of Directors for Avante Logixx Inc. (XX: TSX.V).
He serves as a Member, SCC ISO-IEC JTC 1/SC-41 – Internet of Things and related technologies, ISO – International Organization for Standardization, and as a member of the NIST SP 500-325 Fog Computing Conceptual Model, National Institute of Standards and Technology.
He served on the Board of Governors of the University of Ontario Institute of Technology (UOIT) and on the Board of Advisers of five different Colleges in Ontario. For 16 years he served on the Board of the Society of Motion Picture and Television Engineers (SMPTE), Toronto Section.
He holds three master’s degrees, in business (MBA), communication (MA), and education (MEd). As well, he has diplomas and certifications in business, computer programming, internetworking, project management, media, photography, and communication technology.