{"id":89,"date":"2026-05-05T20:14:49","date_gmt":"2026-05-05T20:14:49","guid":{"rendered":"https:\/\/www.dataradar.io\/blog\/?p=89"},"modified":"2026-05-06T22:30:46","modified_gmt":"2026-05-06T22:30:46","slug":"snowflake-native-app-or-saas-token","status":"publish","type":"post","link":"https:\/\/www.dataradar.io\/blog\/snowflake-native-app-or-saas-token\/","title":{"rendered":"Snowflake Native App or SaaS Token. The Anodot Breach Made the Choice Easy."},"content":{"rendered":"
On Good Friday, April 4, 2026, more than a dozen companies had their data stolen in an attack deliberately timed to the Easter\/Passover holiday weekend to slow detection and response. Snowflake did not fail them. Public reports indicate the intrusion path ran through a third-party integration used by multiple customers. That is a data security story. It is also a data monitoring story. And for most enterprises running Snowflake today, it is a warning they cannot afford to ignore.<\/p>\n
Attackers stole access tokens from a third-party tool called Anodot. They used those tokens to reach the Snowflake accounts of more than a dozen companies. Snowflake itself was not hacked. The tool connected to it was.<\/p>\n
The headlines on early Monday, April 6, 2026, called it a Snowflake breach. That label was wrong. Here is what the facts show. Anodot is a third-party AI tool. It connects to Snowflake customer accounts to run data checks. Attackers broke into Anodot’s own systems. They stole the access tokens Anodot used to reach its customers’ Snowflake data. Then they used those tokens to move into more than a dozen enterprise Snowflake accounts.\u00b9<\/p>\n
Snowflake’s own systems were not touched. No flaw in Snowflake was used. The breach point was the link between Anodot and its customers. Not the platform itself. The link. The ShinyHunters group later claimed credit. They said they had stolen data from dozens of companies. They also said they had access to Anodot’s systems for some time before anyone noticed.\u00b9<\/p>\n
This is not the first time this group has run this play. In 2024, they ran a nearly identical attack. That campaign hit AT&T, Ticketmaster, Santander, and Neiman Marcus, among others.\u00b9<\/p>\n
The method was the same both times. Find a tool that connects to Snowflake. Break into that tool. Use its access to reach the data behind it.<\/p>\n
Snowflake’s platform held both times. The breach point both times was the external tool sitting between the attacker and the data. This is a pattern. Not a one-time event.<\/p>\n
Anodot is a data anomaly detection tool. Its job is to catch problems in data. It did not catch the problem in its own systems. That gap exists in most third-party data monitoring tools today.<\/p>\n
The Anodot incident is worth a closer look, because the details matter. According to public reporting from BleepingComputer, TechCrunch, and TechRadar, Anodot is an AI-powered anomaly detection platform. Its job is to spot unusual patterns in data for its customers.<\/p>\n
Here is what the reporting tells us. Attackers stole authentication tokens from inside Anodot’s systems. The group known as ShinyHunters then used those tokens to reach Snowflake environments owned by Anodot’s customers. On April 9, 2026, Snowflake confirmed that Anodot was the third-party platform involved. A small number of Snowflake customer accounts were affected. Snowflake’s own systems were not compromised.\u00b9<\/p>\n
For enterprise buyers, the lesson is about architecture, not blame. Any tool that uses third-party tokens to reach your Snowflake data creates an outside dependency. If that outside system is breached, the tokens become the path in. A true Snowflake Native App takes that path away. The work stays inside your Snowflake environment, where it belongs.<\/p>\n
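A quick way to gauge your current exposure is to ask Snowflake itself which non-interactive clients have been logging in. The sketch below builds such an audit query in Python. It assumes Snowflake's documented SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY view and its REPORTED_CLIENT_TYPE column; treat it as a starting point for a review, not a complete one, and verify the view names against your account.

```python
# Sketch: build an audit query that surfaces programmatic (driver-based)
# logins to a Snowflake account over a recent window. View and column
# names follow Snowflake's documented ACCOUNT_USAGE schema -- verify
# against your own account before relying on the results.

def third_party_login_audit(days: int = 30) -> str:
    """Return SQL listing non-UI logins grouped by user, client, and IP."""
    return f"""
    SELECT user_name,
           reported_client_type,
           client_ip,
           COUNT(*) AS logins
    FROM snowflake.account_usage.login_history
    WHERE event_timestamp > DATEADD('day', -{days}, CURRENT_TIMESTAMP())
      AND reported_client_type != 'SNOWFLAKE_UI'
    GROUP BY 1, 2, 3
    ORDER BY logins DESC
    """

# Run the generated SQL through whatever Snowflake client you already use.
query = third_party_login_audit(7)
```

Any driver-based login you cannot map to a known integration is worth a closer look, because that is exactly the kind of access a stolen token produces.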
That is not a knock on Anodot as a product. It is an example of a core limit in how most external data tools work.<\/p>
\nWhen a data tool lives outside Snowflake, it can only see what it is allowed to access via its API link. It cannot see what is happening to the link itself. It cannot see if that link is being used by someone who should not have it.<\/p>\n
The monitoring stops at the edge of the tool. But the risk does not.<\/p>\n
IBM’s 2025 Cost of a Data Breach Report found that third-party and supply chain breaches cost an average of $4.91 million per incident, making them the second most expensive breach vector in the study.2<\/sup><\/p>\n Verizon’s 2025 Data Breach Investigations Report put the scale of the trend in plain terms. Third-party breaches now account for 30% of all breaches. That is double the rate from two years ago.3<\/sup><\/p>\n In the United States, the average cost of a data breach hit a record $10.22 million in 2025. Supply chain breaches also take an average of 26 extra days to detect compared to other breach types. Every extra day is another day of open exposure.2<\/sup><\/p>\n Not every tool that calls itself a Snowflake Native App works the same way. Some are true native apps that run inside your Snowflake account. Others are connector agents that link your Snowflake account to their external application and process the data outside your environment. The difference matters a great deal for security.<\/p>\n If you are shopping for a data monitoring tool for Snowflake right now, you will hear the phrase ‘Snowflake Native App’ from more than one vendor. It is important to understand that this phrase does not mean the same thing from vendor to vendor. A true Snowflake Native App runs within the customer’s Snowflake account. The code runs on the customer’s own compute, with no external processing of your data, greatly reducing your risk.<\/p>\n Most of the established data observability and monitoring platforms on the market today follow a similar architectural pattern. They deploy an agent or connector that authenticates to the customer’s Snowflake account, then pulls metadata, query logs, and, in many cases, sample data out of Snowflake and sends it to the vendor’s own cloud environment, typically AWS, GCP, or Azure, where the analysis, machine learning, and alerting happen. 
The results are then surfaced back to the customer through the vendor’s web application.<\/p>\n This pattern is common because it is easier for vendors to build and maintain. All customers share the same vendor-hosted analytics stack. But it creates a set of risks that customers often do not see until something goes wrong:<\/p>\n<\/div>\n\n Your data now lives in two places.<\/strong> Whatever the vendor pulls, including metadata, query text, sampled rows, schema definitions, and lineage information, is copied outside your Snowflake account and stored in an environment you do not control. You are now trusting the vendor’s security posture as much as your own.<\/p>\n <\/div>\n <\/li>\n \n The attack surface is now the vendor’s entire customer base.<\/strong> A single compromise of the vendor’s environment can expose data from every customer connected to it. The economics strongly favor the attacker: one breach, many victims.<\/p>\n <\/div>\n <\/li>\n \n You inherit the vendor’s compliance scope.<\/strong> If the vendor’s cloud is the one processing your data, their SOC 2, ISO, HIPAA, and regional data residency controls are now part of your compliance story. A gap on their side becomes a gap on yours.<\/p>\n <\/div>\n <\/li>\n \n Network egress opens a door.<\/strong> Any architecture that moves data out of your Snowflake account requires outbound network paths and firewall exceptions, and creates monitoring blind spots. Every one of those is a line item your security team has to review, approve, and maintain.<\/p>\n <\/div>\n <\/li>\n <\/ul>\n\n None of this makes these platforms bad products. Many of them deliver real value and have mature feature sets. But the architectural choice to run outside the customer’s environment is a choice with security, compliance, and cost consequences that every buyer should understand before signing.<\/p>\n A true Snowflake Native App has no outside cloud involved. No access tokens sent to third-party servers. 
No data moving outside your account. DataRadar\u2122 meets that standard.<\/p>\n Every tool you connect to Snowflake from the outside adds cost, lag, and risk. Most of that cost never shows up on a vendor comparison sheet. It shows up later, in engineering hours, compute bills, and incident response.<\/p>\n There is a financial side to this that rarely gets discussed in vendor demos. Call it the integration tax:<\/p>\n Engineering time spent building and fixing the pipelines that move data to the external tool.<\/p>\n <\/div>\n <\/li>\n \n Compute costs for running the same data through two systems instead of one.<\/p>\n <\/div>\n <\/li>\n \n Middleware fees for the connectors that exist only to link Snowflake to the outside tool.<\/p>\n <\/div>\n <\/li>\n \n Incident response costs when a connection fails, gets exploited, or goes out of sync.<\/p>\n <\/div>\n <\/li>\n <\/ul>\n\n IBM’s 2025 CDO research found that more than 43% of chief data officers name data quality as their top data concern. Over a quarter of companies lose more than $5 million a year due to poor data quality alone.7<\/sup><\/p>\n That number climbs fast when you add compute waste, integration lag, and the risk of a breach through a connected tool. Native Apps that are truly native operate end-to-end inside the customer’s Snowflake cloud. All execution happens on the customer’s compute, and no data is copied or transmitted elsewhere. The risks of external data handling are removed by design.<\/p>\n DataRadar\u2122 is built on Snowflake’s Native App Framework. Here is what that means in practical terms.4<\/sup><\/p>\n DataRadar\u2122 runs inside your Snowflake account. Every query runs on your compute. Results come from your live data. No data is stored outside the Snowflake cloud. Everything is a pass-through to your own Snowflake environment. Nothing is copied, cached, or held on a third-party server.<\/p>\n External tools depend on synced copies of your data. Those copies age. 
An alert that fires based on data from six hours ago is not the same as an alert that fires based on what is in your warehouse right now.<\/p>\n Native compute means DataRadar\u2122 queries run against live Snowflake data. When a pipeline health alert fires, it reflects what is actually happening. In fast-moving industries, that difference is the gap between catching a problem early and finding out about it from a business user.<\/p>\n Every query, every alert, every action that DataRadar\u2122 takes is logged inside Snowflake’s own audit system. There is no outside system holding a copy of your governance trail. Your compliance team can answer any access question from a single source of truth.<\/p>\n Snowflake replicates data across availability zones by default and across regions when configured. A native app built on top of Snowflake gets that resilience automatically. When Snowflake is running, DataRadar\u2122 is running.<\/p>\n External tools have their own uptime record. When they go down, your monitoring goes down with them, even if Snowflake is running perfectly. The Anodot incident showed exactly this. When Anodot’s connectors failed, customers lost all visibility into their own Snowflake data across every region at the same time.1<\/sup><\/p>\n Most data monitoring tools solve one problem: either data quality or compute cost. Buying one means you still have a gap. DataRadar\u2122 covers both in a single native app inside Snowflake.<\/p>\n The data monitoring market split into two camps years ago. Data quality tools on one side. Cost control tools on the other. Neither camp was built to talk to the other.<\/p>\n<\/div>\n\n Here is how the market breaks down today:<\/p>\n<\/div>\n\n Data Quality Camp:<\/strong> A set of well-known data observability vendors offers tools built to monitor data accuracy, freshness, schema drift, and pipeline anomalies. They are good at finding problems in your data. 
None of them were built to manage what that bad data costs you in Snowflake compute.<\/p>\n <\/div>\n <\/li>\n \n Cost Optimization Camp:<\/strong> A separate group of vendors focuses on controlling Snowflake spending, right-sizing warehouses, and reducing credit waste. They are good at finding problems in your bill. None of them were built to tell you why your data is wrong.<\/p>\n <\/div>\n <\/li>\n \n Trying to Straddle Both:<\/strong> A smaller group of vendors attempts to cover both camps. None of them does it natively inside Snowflake. All of them require external connections and access tokens to reach your data. That is exactly the attack surface the Anodot breach exploited.<\/p>\n <\/div>\n <\/li>\n \n No Longer in the Race:<\/strong> Select Star, once a data catalog and governance tool, was acquired by Snowflake in November 2025 and is being folded into Horizon Catalog. It is no longer an independent option.<\/p>\n <\/div>\n <\/li>\n <\/ul>\n\n Most enterprises ended up buying one tool from the quality camp and one from the cost camp. That means two products, two setups, two outside connections, two access tokens, and two invoices. And the two tools still do not share a common view of what is happening inside your Snowflake environment.\u2078<\/p>\n The DataRadar\u2122 Observability Framework covers five dimensions inside one native app.<\/strong><\/p>\n These five areas are deeply linked. A pipeline failure that causes a data reliability problem will almost always cause a compute spike at the same time. Seeing all five in one place, against live data, lets you connect the dots before the damage reaches a business user or a budget review. If you run Snowflake and you have outside tools connected to it, these four questions are worth asking before next week:<\/p>\n These are not abstract questions. They are the exact questions that data teams at more than a dozen companies wish they had asked before Good Friday 2026.<\/p>\n No security architecture can eliminate all risk. 
What a true Native App does is reduce exposure to specific third-party token compromise risks. DataRadar\u2122 runs inside Snowflake. There is no external cloud involved.<\/p>\n The April 4, 2026, Snowflake ecosystem breach was not a platform failure. It was a failure of the integration layer that sits between Snowflake and the tools bolted on top of it. Every outside tool connected to your Snowflake account is a risk point. One effective way to mitigate this class of risk is to run natively, inside Snowflake, with no outside systems involved. Confie built DataRadar\u2122 because distributing insurance at a national scale means your data must be trusted, protected, and AI-ready. We knew the problem. We built the fix.<\/p>\n<\/div>\n\nWhat Does Native App Really Mean?<\/h2>\n
\n
\nSnowflake’s own Native App Framework has a clear definition.<\/p>\nWhat Do the Well-Known Data Observability Platforms Actually Do?<\/h2>\n
\n \n
Key Insight:<\/h5>\n
The Hidden Cost of Bolting Tools Onto Snowflake<\/h2>\n
\n
\nWhen a data tool lives outside Snowflake, data has to move. It gets pulled from Snowflake, copied into the external tool’s systems, and processed there. Every one of those steps has a cost.<\/p>\n<\/div>\n\n\n \n
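As a back-of-envelope illustration, the integration tax described above can be tallied as a simple sum. Every figure in this sketch is a hypothetical placeholder, not a benchmark from the article or any study; the point is only that these line items exist and add up before you ever compare sticker prices.

```python
# Back-of-envelope sketch of the "integration tax" for one external tool.
# All dollar figures are hypothetical placeholders -- substitute your own.

integration_tax = {
    "pipeline_engineering": 120 * 150,  # 120 hrs/yr of build/fix work at $150/hr
    "duplicate_compute": 40_000,        # running the same data through two systems
    "middleware_connectors": 18_000,    # fees for connectors that only bridge tools
    "incident_response": 25_000,        # failed, exploited, or out-of-sync links
}

annual_tax = sum(integration_tax.values())
# Compare annual_tax against the sticker-price delta of a native platform;
# a native app removes the data-movement steps, so these line items drop out.
```

With these placeholder numbers the tax comes to six figures a year, which is the sense in which "the math almost always flips" even when the native platform costs more up front.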
\nCompanies that move to a native Snowflake platform tend to find that the true cost of ownership drops once they stop paying the integration tax. The native platform may cost more on paper at the start. When you add up what you stop paying, the math almost always flips.<\/p>\nWhat Running Natively Inside Snowflake Actually Means<\/h2>\n
Your Data Never Leaves<\/h3>\n
Results Are Based on Live Data<\/h3>\n
Your Audit Trail Is Complete<\/h3>\n
Resilience Comes Built In<\/h3>\n
One Platform Instead of Two<\/h3>\n
\n
\n<\/picture>\n\n\n \n
\n
\nThat is what a unified native platform makes possible. One app. One setup. One security surface. One invoice.<\/p>\nFour Questions Every CDO Should Ask Right Now<\/h2>\n
\n
Key Insight:<\/h5>\n
Summary<\/h2>\n
\nThat is why we built DataRadar\u2122.<\/p>\n