The Lost Puppy Problem: When Feel-Good Features Hide Surveillance Risk

It starts innocently. A neighbor’s dog slips out of the yard. Someone posts in a local group. A Ring alert lights up phones across the block. Cameras quietly scan fences, sidewalks, and driveways. A familiar shape appears, the dog is found, and relief follows.

It is an easy story to like, which is exactly the point.

Ring’s new AI-powered pet search feature is designed to produce moments like this. Gizmodo’s recent reporting highlights the feature, but the real lesson has little to do with dogs. It is about how serious questions around surveillance, consent, and data use get softened, and often avoided, through emotional framing. From an internal audit and risk perspective, the pattern is familiar. The technology is not new, but the story is.

The Feature Is Not the Risk. The Narrative Is.

Ring’s feature uses computer vision to scan footage from nearby cameras to look for a missing pet. You do not need to own a Ring camera to activate it. Your neighbors’ devices do the work. On its own, that capability is neither good nor bad. Context is what matters.

Ring has spent years under scrutiny for expanding private surveillance, for its data sharing practices, and for its relationships with law enforcement. Those conversations are uncomfortable and hard to defend in public. So the narrative shifts: it is no longer about surveillance. It is about community, about neighbors helping neighbors. And once the story is told that way, objection starts to feel awkward, even unreasonable at times.

Anyone who has audited a fast-growing tech company has seen this move before. When the risk questions get difficult, the story gets warmer.

This Is a Governance Issue, Not a Privacy One

The real risk is not whether the system correctly identifies a golden retriever. It is whether new uses of existing capabilities trigger renewed oversight. Internal audit should be asking whether this use case was considered when video analytics were first approved, whether the consent model anticipated AI-driven scanning across an entire neighborhood, and whether users were clearly informed of how their data would be used or simply enrolled by default with an opt-out buried in settings.

This is classic scope creep. Only instead of financial controls drifting, it is data usage and surveillance reach expanding quietly. Too many organizations treat privacy reviews as one-time exercises. In practice, any new use of existing data, especially pattern recognition, deserves a fresh risk assessment.

When Good Intentions End Badly

Good intentions do not reduce risk; in fact, they often increase it. Helpful outcomes lower resistance, so governance relaxes, and no one wants to be the person raising concerns when the story ends with a happy reunion.

We have seen this pattern repeatedly. Security tools are reused for employee monitoring. System logs are repurposed for performance management. Location data collected for safety later finds its way into monetization strategies. The question is not why Ring built this feature. It is what else this infrastructure now enables, and who decides when the line has been crossed.

Default Settings Come With Implicit Risk

Here’s one detail that should make any auditor uneasy. Features like this are often turned on by default. But defaults are not neutral; they are design choices with real risk implications.

When the onus is on users to opt out, the company is betting that convenience outweighs informed consent. That may be legally defensible, but it is rarely a sign of mature risk management. Internal audit should be asking who approves default-on features tied to data sharing, whether there is meaningful risk review or whether decisions live solely with product teams, and whether defaults are tested against stated principles or just adoption targets.

When audit shows up only after public backlash, the damage is already done.

What Internal Audit Should Take From This

This is not about Ring alone. Any company deploying AI or large-scale monitoring should pay attention. Risks should be reassessed when new use cases emerge, as old approvals may no longer hold. Narratives should be audited alongside controls, because warm stories often hide hard tradeoffs. Defaults should be treated as high-risk decisions, since default-on is effectively implicit consent.

This is exactly where internal audit adds value early, before regulators or journalists force the conversation. At Cherry Hill Advisory, much of our work lives here: not checking whether a control exists, but asking whether leadership understands what it has quietly built. We help high-growth companies surface blind spots early and reduce unnecessary exposure to risk.

Final Thought

Lost puppies make great stories, and they make convenient cover. The job of an auditor is not to argue against happy endings. It is to make sure the systems behind them do not create risks that no one is prepared to own.

Today it is dogs. Tomorrow, it could be something much harder to explain away.