Fiddler raises $10.2 million for AI that explains its reasoning

Explainable AI, which refers to techniques that attempt to bring transparency to traditionally opaque AI models and their predictions, is a burgeoning subfield of machine learning research. It’s no wonder — models sometimes learn undesirable tricks to accomplish goals on training data, or they develop biases with the potential to cause harm if left unaddressed.

That’s why Krishna Gade and Amit Paka founded Fiddler, a Mountain View, California-based startup developing an “explainable” engine that’s designed to analyze, validate, and manage AI solutions. A little over a year later, it’s attracted $10.2 million in a series A funding round co-led by Lightspeed Venture Partners and Lux Capital, with participation from Haystack Ventures and Bloomberg Beta.

“Businesses understand the value AI provides, but they struggle with the ‘why’ and the ‘how’ when using traditional black box AI,” said CEO Gade, who added that the fresh capital brings Fiddler’s total raised to $13.2 million. “Businesses need to de-risk their AI investments, but most applications today aren’t equipped to help them do that. Our AI Engine is grounded in explainability, so those affected by the technology can understand why decisions were made and course-correct as needed to ensure the AI outputs are ethical, responsible, and fair.”

Fiddler’s genesis lies in Gade’s work as an engineering manager at Facebook, where he and a team developed a system that explained why particular stories appeared on the News Feed. In a search for generalizable methods of imbuing AI models with interpretability, Gade reconnected with former colleague and classmate Paka, who at Samsung led an effort to derive insights from recommendations in the company’s shopping apps. Together with a team hailing from Microsoft, Twitter, Pinterest, and other tech giants, they developed the prototype that became Fiddler’s product.

Fiddler’s explainable engine, which the company expects to launch in general availability next year, explains how and why models make predictions and lets human auditors verify that those predictions remain in line with expectations. A performance component shows users how specific data regions affect the model and lets them tweak inputs to minimize (or maximize) their influence, while a tracking module continuously monitors models to spot outliers, data drift, and errors and alert the appropriate person.
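Fiddler hasn’t published the engine’s internals, but the two ideas above, model-agnostic explanation and continuous monitoring, can be sketched in a few lines. The snippet below is only a rough illustration, not Fiddler’s API: it uses scikit-learn’s permutation importance for attribution, and the drift check (its name and threshold are hypothetical) simply flags a feature whose live mean strays too far from the training distribution.

```python
# A rough sketch of explanation + monitoring; NOT Fiddler's actual engine.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in "black box" model trained on synthetic data.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Explanation: permutation importance scores each feature by how much
# shuffling it degrades held-out accuracy (one common model-agnostic method).
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")

# Monitoring: a hypothetical drift check that alerts when a feature's live
# mean drifts more than `threshold` training standard deviations away.
def drift_alert(train_col, live_col, threshold=3.0):
    shift = abs(live_col.mean() - train_col.mean()) / (train_col.std() + 1e-9)
    return shift > threshold

live_batch = X_test + np.random.normal(5.0, 1.0, X_test.shape)  # simulated drift
print([drift_alert(X_train[:, j], live_batch[:, j]) for j in range(X.shape[1])])
```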

Fiddler says it’s working with several Fortune 500 businesses in banking and finance, financial tech, AI model risk management, and trust and compliance on active proofs of concept, but it has plenty in the way of competition. In August, IBM Research introduced AI Explainability 360, an open source collection of algorithms that leverage a range of techniques to explain AI model decision-making. Kyndi, a startup developing what it describes as an explainable natural language processing platform, raised $20 million in July. Microsoft and Google recently open-sourced InterpretML and the What-If Tool for TensorBoard, respectively, a pair of tool kits for explaining black-box AI. And Facebook says it’s developed a tool to detect and mitigate prejudice in its AI systems.
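Of those open-source options, Microsoft’s InterpretML is among the quickest to try. A minimal sketch, assuming the `interpret` package is installed (`pip install interpret`) and using a standard scikit-learn dataset for illustration: its glass-box explainable boosting model exposes per-feature contributions directly.

```python
# Minimal InterpretML example; the dataset and printout are illustrative.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)
ebm = ExplainableBoostingClassifier().fit(X, y)

# Global explanation: overall importance of each learned term.
exp = ebm.explain_global()
print(exp.data()["names"][:5])
print(exp.data()["scores"][:5])
```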

But Lux Capital partner Bilal Zuberi believes Fiddler can carve out a niche for itself.

“We saw the value in what Fiddler Labs presents to the industry right away,” said Zuberi, who plans to join Fiddler’s board of directors. “We’re seeing more companies coming to us with business models based on leveraging algorithms for everything from recommendation engines and business analytics to medical diagnostics and autonomous vehicles. How can you make significant, mission-critical decisions without explainability and transparency? We’re excited about the market and potential for Fiddler.”
