
AI Privacy in the Quantum Age: Integrated Quantum Technologies (CSE: ICS)

April 14, 2026

Enterprises in banking, healthcare, and finance run powerful AI models every day for credit decisions, fraud detection, and market forecasting. Yet most traditional cybersecurity stops at the perimeter, leaving sensitive data exposed to attackers.

This is the problem being solved by Jeremy Samuelson, Executive Vice President of AI and Innovation at Integrated Quantum Technologies (CSE:ICS). A former AI leader at Equifax and Mastercard, Jeremy invented VEIL™ (Vector Encoded Information Layer) to address these security weaknesses.

VEIL uses a breakthrough approach to mathematically transform raw sensitive data into anonymized vectors. These vectors preserve exactly the information the downstream AI model needs for accurate predictions while stripping away anything that could identify individuals or reveal proprietary signals.

The result? AI models receive high-quality inputs without ever touching the original raw data. And the solution is inherently resilient to the quantum computers that threaten today's encryption standards.

Jeremy explains real-world vulnerabilities in enterprise AI deployments, why the LLM hype doesn't reflect the broader (and more critical) enterprise AI landscape, and how VEIL deploys easily on major cloud and data platforms like Snowflake, Databricks, AWS, Azure, and GCP, with simple usage-based pricing.

For organizations that cannot afford data breaches in the post-quantum era, VEIL offers a practical, performant path forward. Early clients are already testing the technology, with more products in the innovation pipeline.

Listen to understand why privacy-preserving AI infrastructure may become table stakes for responsible enterprise AI adoption.

TRANSCRIPT FOLLOWS AFTER THIS BRIEF MESSAGE

Level up your investing with Sharesight, Investopedia’s #1 portfolio tracker for DIY investors. Track 240,000+ global stocks, crypto, ETFs and funds. Add cash accounts and property to get the full picture of your portfolio – all in one place. Ditch the chaos, track like a pro! Sharesight makes investing easy. Save 4 months on an annual paid plan at THIS LINK

Portfolio tracker Sharesight tracks your trades, shows your true performance, and saves you time and money at tax time. Save 4 months on an annual paid plan at this link

Disclosure: The links provided are affiliate links. I will be paid a commission if you use this link to make a purchase. You will receive a discount by using these links/coupon codes. I only recommend products and services that I use and trust myself or where I have interviewed and/or met the founders and have assured myself that they’re offering something of value.

EPISODE TRANSCRIPT

Phil: G'day and welcome back to Shares for Beginners. I'm Phil Muscatello. Today we're going to be talking about one of the biggest challenges for modern AI: how to use it without exposing sensitive data to the world, especially in the new age of quantum computing. My guest today is Jeremy Samuelson, Executive Vice President at Integrated Quantum Technologies, a company building quantum-ready AI infrastructure for organizations that can't afford to make mistakes. Jeremy is a former AI leader at Equifax and Mastercard, a mentor at Johns Hopkins and the University of Texas, and the inventor of VEIL, a new privacy-preserving technology designed for the post-quantum world. And if that sounds complex, we're going to make it a lot easier for you to understand. Hey Jeremy, how are you doing?

Jeremy Samuelson: Very well, thanks.

Phil: Thanks very much for coming on the podcast.

Jeremy Samuelson: Yeah, thanks for having me.

Phil: So let's just start with an overview. What problem is Integrated Quantum Technologies solving in the new world of AI?

Jeremy Samuelson: Yeah, absolutely. We're doing a lot of research to solve a number of problems that are facing AI systems, which is our primary focus. And our first product that we're taking to market right now, as you mentioned, is VEIL, which stands for Vector Encoded Information Layer. It's a privacy-preserving framework for protecting any sensitive AI inputs. So, as you might imagine, maybe you're a bank or a hospital. You've got models that are doing a number of things, and they may have to consume sensitive information about your patients, or about the consumers who come into your bank. Any information derived from that sensitive data, you want to protect and keep safe while it's being fed into your machine learning model. So that's what our first product does.

Phil: So I don't think a lot of people realize this. I mean, we're all sitting there using AI to complete our everyday tasks, but I don't think people understand that there are larger organizations out there with very sensitive data, and there's some sort of interface with AI that's not particularly secure. Is that the case?

Jeremy Samuelson: Yeah. So to give a little extra context around that, there's always security around the broader system, right? Any of these banking organizations or hospitals, they already have a lot of cybersecurity people who put a lot of thought into protecting the perimeter around their environment, and they'll put a ton of thought into locking down their database. But I found that machine learning models and AI models represent a new challenge that traditional cybersecurity organizations are not really well prepared for. You get these models deployed in any kind of enterprise or organization, and the thought process is more around: how do we do it, how do we innovate, how do we build this machine learning system, how do we get this model deployed? There's not a great understanding of all the ways those machine learning deployments actually make your data and your system vulnerable. That's a huge focus for us. So if you're an attacker and you're looking at a bank or a hospital from the outside, and you want to get in and get access to all this PII, or payment card information, or protected health information (PHI, as we call it), the database will be super locked down and the perimeter will be super locked down. But many of them are finding, unfortunately, that if an attacker can just breach that perimeter, they find all these spots where a machine learning model is deployed behind what's called a prediction API. You call it, you send data inputs in, it does what it does, and then it sends an answer back out. So once you're in, you don't really have to think too hard about breaking into any additional systems, like cracking into the database, because all these data inputs are flying around inside the system, right? They're leaving the database, they're being sent into these prediction APIs. So there are all these vulnerable points where attackers can basically just catch sensitive information as it's flying around. That's the security posture I'm seeing: everyone thinks about all these security measures around the perimeter, and they don't really think about, well, what if somebody gets in? And if you ask them about that, they're like, oh, they won't. But, you know, it's a little naive, honestly.
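To make that exposure concrete, a typical internal prediction API call might look like the sketch below. The endpoint, model name and field names here are invented for illustration, not any real bank's schema; the point is that raw, sensitive inputs travel with every request, so an attacker who has already breached the perimeter and can observe this traffic reads them directly, with no database break-in required.

    import json

    # Invented example of an in-house prediction API payload.
    request_body = {
        "model": "credit-risk-v7",       # hypothetical registered model name
        "applicant_id": "A-10293",
        "inputs": {
            "fico_score": 712,
            "monthly_income": 6800.0,
            "housing_cost_ratio": 0.31,  # housing costs / monthly income
            "years_employed": 6,
        },
    }

    # Anything sent like this to an internal endpoint such as
    # POST https://ml.internal.example-bank.com/predict is readable in
    # transit to anyone already inside the perimeter, absent extra protection.
    print(json.dumps(request_body, indent=2))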

Phil: So can you give us a real-world example? Say a bank: what use would a bank be making of AI, and then where's that little gap that can be breached?

Jeremy Samuelson: So banks use AI for all sorts of things. They use AI for credit decisioning, for fraud detection.

00:05:00

Jeremy Samuelson: Investment banks actually use AI for a lot of their forecasting on markets and things like that. And in all these cases, they would care very much about keeping the data they're using private. So if you have someone who's applied for a credit card at your bank, or an auto loan or something like that, you would run a lot of details about them through a model that tries to predict: is this person going to be a good credit risk or not? Do they have a good FICO score? You look at things like the ratio of their housing costs to their monthly income, their job history, all these different things. So all this is pretty sensitive information about people. You need to make sure that all the information you've engineered as inputs and sent out to your model API is protected, so that anybody who's trying to intercept that information is not able to do so. It's a very similar situation with fraud detection. When I was at Equifax, we were building models for detecting identity fraud, again in the context of new account opening. A bank is not just going to look at whether you seem like a good credit risk; they have to make sure, is this person who they say they are? Is this actually a real identity? And to be very certain you're looking at the right person, a real identity, you've got to look at a lot of very sensitive inputs to really be sure you've got the right person, that you know who you're talking to. Or even in situations where you're talking about, say, trading algorithms: investment banks spend millions and millions of dollars researching those signals, figuring out exactly how they're going to trade on them. So you don't just have to think about the privacy or secrecy of the data, but also the secrecy of the modeling technique involved. You don't want someone to break into your environment, extract what your model is, what it's doing and what data it's consuming, and reverse engineer your whole trading process, or your whole process for forecasting what stocks are going to do. So there are lots and lots of situations within a bank where they would want to keep their model and the data it's using very secret.

Phil: So Integrated Quantum Technologies was formed to address these problems. How long has the company been around, and when did it IPO? Just give us a little bit of a taste of the history and where you came from.

Jeremy Samuelson: So I actually don't know a ton of the company's history. The company's been around for longer than I've been involved with them. They were Integrated Cyber Solutions, and they were taken public, I think, something like three or four years ago on the CSE, which is a Canadian exchange. At the time they were entirely focused on cybersecurity consulting, and very focused on what they called the human firewall, which is: you can have all this really cool, really nifty tech for keeping your enterprise environment secure, but if your employees still have very risky security behaviors or terrible password hygiene, that's the weakest link in the whole thing. So there was a big focus on that. I became involved with the company in August of last year, when I was added to their Cyber Future advisory board. The company was essentially looking for a totally new paradigm shift, a totally new way of approaching what they were doing. I had this paper I had been working on; I had been researching and developing the idea for between two and a half and three years, and I actually did have working source code as a proof of concept of what it did. I ended up showing the leadership and the board of directors, and they immediately loved it. They were like, this is it. So the company has had that new mission of focusing on post-quantum AI infrastructure since, I would say, August or September of last year, even though the company has actually existed for far longer.

Phil: And that's the VEIL technology, is it? What does that stand for again?

Jeremy Samuelson: It stands for Vector Encoded Information Layer.

Phil: Okay. And this is pretty much designed for the post-quantum world. So presumably quantum computing is very close on the horizon, and it can do a lot more calculations than existing computing. Am I right about that? And is that where data breaches are going to be coming from in the future? Because the criminal actors in this space will have the power to crunch the numbers to such an extent that current systems aren't fit for purpose.

Jeremy Samuelson: Yeah, that's correct. Encryption is definitely the backbone of how we protect data today, and all of our encryption really relies on this assumption that the calculations you'd have to do in order to break it are just really, really hard. They're not impossible, just really, really difficult. And with quantum computing still being on the horizon, it's at least getting serious enough, at least getting close enough, that governments are starting to take this seriously.

00:10:00

Jeremy Samuelson: Security organizations are starting to take this seriously too. And as you say, quantum computers are a lot more powerful than traditional computing systems. In fact, they're so powerful that the computations that have historically been really hard for breaking encryption are not so hard anymore. That's the looming threat they pose: all of the encryption we're using to protect sensitive banking data, healthcare data, government agency data, whatever it may be, can just be cracked wide open by quantum computers. And however you think about it, there are a lot of differing opinions. Some people are like, oh, it's basically here, or it's going to be here next year. Other people are a lot more skeptical. There are even some people so skeptical they'd say it's sort of like the next fusion, right? Always going to be 10 or 15 years away. So however you feel about it, whatever your personal beliefs are, what I think is really cool about our technology is that it's ready for systems that exist today. You don't have to be worried about quantum threats, and you don't have to be on any kind of quantum computing system. It works with existing systems that are out there today. It prevents all of the different vulnerabilities that machine learning models have now, but it's also resilient to any of those looming quantum threats. And whether you believe the technology is coming or not, here in the U.S. the government at least takes it seriously enough that they've moved the deadline up to 2030, where it used to be 2035. So any three-letter government agency has to be post-quantum by 2030. We can essentially help all these organizations with that journey of becoming post-quantum in terms of their MLOps.

Phil: Ditch the spreadsheets. Sharesight is Investopedia's top tracker for DIY investors. Invest smarter, not harder. Grab four months free on an annual premium plan at sharesight.com/sharesforbeginners. So that's an interesting thought, that the government has legislated that government agencies need to be prepared for quantum computing by 2030. What form has that taken?

Jeremy Samuelson: So, I actually don't know if legislation has been passed specifically, although I know it's a thing the Trump administration has specifically talked about. I'd have to go back and double-check whether there's been an actual executive order, which seems to be how we do things a lot these days, or whether there's more permanent legislation in place. But yeah, my understanding is that previously they all had to be post-quantum by 2035, and now they're all talking like it's got to happen in 2030. DARPA's quantum readiness benchmark also measures systems in terms of being quantum-ready by 2033. So it's all right around there. Very early next decade, you can expect to see a lot of systems switching to post-quantum technologies.

Phil: So the company's playing in the AI space, and a lot of people are saying, you know, we're in an AI bubble. What do you think? Um, what do you think of the shape of AI at the moment?

Jeremy Samuelson: Yeah, that is a great question. Whenever I talk about this AI bubble, I'm very careful to be very specific about what we mean by AI. AI is a huge space, and for the most part, people have only really been aware of AI for the past few years. ChatGPT was released in, I think, November of 2022, so not that long ago, and it immediately became wildly popular; they hit 100 million users faster than any other application in history. So for many people, when you say anything to them about AI, AI is ChatGPT, right? Or these large language models. But the reality is, whether you realize it or not, you have been coming face to face with AI algorithms every single day of your life for probably the last 10 or 15 years. Anytime you use an app like Apple Maps or Google Maps or Waze, and it comes up with a path from where you are to where you're trying to go, and how long it's going to take you to get there, that's AI. There's an AI algorithm that comes up with the best path, and another AI algorithm that estimates how long it would take to traverse that path. There are algorithms pricing stocks that banks and hedge funds have been using since the 80s or the 70s, actually a little bit further back. Anytime you look up a house on Zillow and you look at the Zestimate for that house, that estimate of how much the house is worth or what it should rent for, that's AI doing that estimation. So the whole field of AI is a huge, huge field of all kinds of stuff.

00:15:00

Jeremy Samuelson: Then within that you have machine learning, within that you have deep learning, and then within that you've got large language models and agentic AI. So when people look at these things like, oh, there's all this circular financing happening, there's all this stuff with OpenAI or Anthropic, are these guys going to be in business in a year, are they going to run out of money? I'm always quick to point out all these other algorithms we've talked about today, things like credit decisioning models and fraud detection models. None of that has anything to do with LLMs; none of that has anything to do with agentic AI. So I'm not going to speculate about whether OpenAI will be out of business next year. I have absolutely no clue. But what I can tell you is, if they are, that still doesn't touch about 85 to 90% of the AI use cases that are out there. So for us, if OpenAI goes out of business or Anthropic goes up in smoke or whatever, banks are still going to be using AI for fraud detection, for credit decisioning, for all the things they use AI for, and they're going to need privacy-preserving frameworks for all of those models. So, are we in an AI bubble? I have no idea. But again, it's really more of an LLM bubble than a broad AI bubble.

Phil: I remember a few years ago reading a book about codes, I think it might actually have been called Code, about code breaking through human history, right from the very first codes. It's always been a race between code makers and code breakers, hasn't it? And the code breakers always end up winning, don't they?

Jeremy Samuelson: Yeah, eventually, it's true. But I don't know, I think it's actually very cool, right? It's a very exciting thing. In fact, in many ways, my field wouldn't exist if not for that. Alan Turing building his code breaking machine to break the Enigma machine back in World War II, I mean, that...

Phil: Fantastic story, isn't it, how they did that?

Jeremy Samuelson: Yeah, so fascinating. And that really was kind of the birth of modern computing, right? Of modern computer science. So in some ways that arms race drives the very existence of a lot of these fields, and a lot of mathematical discovery as well.

Phil: So basically the clients of Integrated Quantum Technologies are governments, banks, basically large businesses that have a lot at risk from cybersecurity threats, don't they?

Jeremy Samuelson: Yeah, correct. Those are our main focus areas: financial services and fintech, hospitals and healthcare systems, insurance, pharmaceuticals, things like that. And then also government agencies, plus defense and intelligence contractors that work very closely with government agencies and might even be working on, say, classified programs. So those are very much the areas we're focused on. Interestingly, we've come into some pretty interesting projects and clients. We just kicked one off this week that was not actually based on a need for privacy at all. The really interesting thing about VEIL, and this was the main thing that pushed me to develop it, is that if you look at the other technologies that exist for privacy-preserving machine learning and AI, so there are things out there: differential privacy, federated learning, homomorphic encryption, all this stuff. And all of these systems ask you to make a trade-off in that they degrade your model performance. As an AI scientist who was researching and developing a lot of these systems, to model identity fraud and all that, that was very frustrating for me. You spend months talking to certified fraud examiners, digging through the data and finding all these great signals that tell you when someone's committing, say, synthetic identity fraud, and now it's time to deploy your model, and you have to do it in this privacy-preserving way. Maybe you've got this really great model that's able to detect 90% of all the synthetic identity fraud it sees, but you put it under this privacy-preserving framework and now it's just not that reliable anymore: it's only able to detect 80% or 75% of the fraud it sees. Super frustrating, right, to be the AI scientist who developed the system and now have to deploy it in a way that nerfs it a bit. So I set out to build a system that no longer required making that trade-off, one that actually preserved model performance. It turns out we were so successful that in many of the use cases and experiments we've seen, it actually boosts model performance. It doesn't just keep model performance where it's at; you see this little boost using VEIL versus not using VEIL. So we've even had customers come to us not for the privacy-preserving aspect at all. They're in this situation where they're like: hey, we have this model, we're trying to build this product or deploy this model, and it's almost good enough for us to use, or almost good enough for us to sell to our customers.

00:20:00

Jeremy Samuelson: They just need this little boost of another 5 or 6 percentage points in accuracy that they can't seem to figure out how to get, and they ask, can you help us out with that? So we have one of those projects that just kicked off this week, where the client is way more interested in the performance boost aspect than the privacy piece. So yeah, kind of interesting that it's worked out that way.

Phil: So how does it achieve that performance boost?

Jeremy Samuelson: You know, it's really just the case that those approaches typically obscure the data: you're adding encryption, or you're adding noise, things like that, which just tends to make things harder on your model. With solutions like differential privacy, you actually alter the training of the model and add a lot of noise to the calculations that happen in training, which gives you certain privacy guarantees. But now you go to deploy that model, and it has a harder time making accurate predictions, because you've made the whole training process very noisy. Homomorphic encryption does something totally different: it encrypts the inputs that are sent into the model, and they're never decrypted. So you have to figure out how to get the model to directly ingest this encrypted data, which you can do, but it adds a lot of complexity, a lot of overhead, a lot of additional cost. And both of these methods just make your model not work as well as if it had been able to see the data directly. The thing we ended up doing is totally different. We're not encryption, and we don't add noise. I would call our process ICA, which stands for Informationally Compressive Anonymization. We're actually just anonymizing the data, but we use a lot of principles from information theory to make sure that the specific information needed by the downstream model to make its predictions is all still preserved, and only that information is preserved. Any of the other information, any of the other utility you might be able to get out of the data, like finding out a person's identity, or finding out certain characteristics or features about that person (where have they worked before, how much work history do they have, how old are they), all of that has been removed. So for any bad actor looking to grab that information and exfiltrate it out of an organization's environment to use later for all sorts of nefarious purposes, it's essentially useless. It's really only useful for exactly the downstream task you intended it for. Which is also part of how it ends up being post-quantum: there's no encryption to break, there's nothing to hack into. It's just a completely different representation of the data that doesn't have the information you want if you're a bad actor, so it doesn't really matter how much compute power you have. In all of our research and theoretical modeling, we assume a computationally unbounded attacker: the attacker is not compute-bound at all, they have all the compute they could ever want. And no matter how much compute power you have, you can't extract information that's just not there anymore. So that's how the whole...
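For contrast with the approaches Jeremy mentions, the cost of differential privacy is easy to see in a toy sketch like the one below. This is a generic, local-DP-style illustration that perturbs the values themselves (DP training schemes more often add noise to intermediate computations); the numbers and parameters are invented. The noise that buys the privacy guarantee is exactly what makes the model less accurate.

    import numpy as np

    rng = np.random.default_rng(7)

    # Invented training values for one feature.
    income = np.array([52_000.0, 81_500.0, 67_250.0])

    # Laplace noise calibrated by sensitivity / epsilon; a smaller epsilon
    # means stronger privacy and more noise.
    epsilon, sensitivity = 0.5, 100_000.0
    noisy_income = income + rng.laplace(scale=sensitivity / epsilon,
                                        size=income.size)

    # The model now trains on heavily distorted values, hence the accuracy hit.
    print(noisy_income)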

Phil: So, um, I think that leads into a question I've got here. How does VEIL allow AI to do its job without ever touching the original raw data? Is that what we're referring to here, that the original raw data is just not out there with the large language models in their data centers, running their GPUs to come up with an answer for you? Is that the way to look at it?

Jeremy Samuelson: Yeah, exactly. So you have your source environment, which has your data repository. It could be a database or any number of databases, a data warehouse, a data lake, whatever. That's where you keep all of that sensitive information, and the VEIL, the Vector Encoded Information Layer, sits in that environment with that database. What we essentially do is control all of the traffic for how machine learning models request information. You have this model that's now out there; it could be in a cloud environment, or in another partition or another part of your environment on-prem, it doesn't really matter. That model is no longer able to communicate directly with any database. It has to talk to that Vector Encoded Information Layer, and it says: okay, I need to make a prediction, please send me my inputs. VEIL checks: is this coming from a registered model? So we actually know the request is coming from a legitimate place; we can verify all of that. Given that this is a model we recognize, we have it in our registry and we know what its inputs are supposed to look like. It then applies all of the logic to grab that information out of the database, and then it applies all of the different mathematical transformations to that information.

00:25:00

Jeremy Samuelson: that transformed version of the data that's now mathematically anonymized is the only version of the data that's allowed to leave that source environment and enter the MLOPS pipeline or any part of the AI infrastructure. So that essentially now makes it so that, you know, you don't have to make these assumptions about how strong is your perimeter and can you keep bad guys out. You can actually go ahead and assume they'll definitely get in at some point. You can assume they're already in. And that just makes it so it doesn't really matter. There's just nothing, there's nothing in there for them.

Chloe: Super is one of the most important investments you'll ever make. But how do you know if you're in the best fund for your situation? Head to lifesherpa.com to find out more. Life Sherpa, Australia's most affordable online financial advice.

Phil: So a bad actor with their quantum computer, what are they going to see when their quantum computers try and get into this space?

Jeremy Samuelson: Yeah. So say you crack into an environment and you start intercepting a bunch of information that was bound for a machine learning or AI model. The name Vector Encoded Information Layer kind of tells you: it all gets encoded as what we call a vector, which is basically just an array of numbers. In principle, that's one way it protects the information. Your original input could have been some kind of image, like medical imaging, or audio, or very structured tabular data, or text embeddings, or graph embeddings; it could have been any of that. But there's nothing in the veiled representation of the data that gives you any clue it was anything in particular to begin with. It's all just a vector, and that's basically all you see: some sequence of numbers. What you were hoping to see is machine learning input features that would tell you things like this person's age, or how many years of employment history they have and where that's verified, things that are maybe derived from their Social Security number. And now you don't see any of that. You just see a sequence of what looks like 8 or 10 or 16 totally random numbers that don't mean anything. You'd have to come up with some way to reconstruct what the original input looked like, but that turns out to be mathematically impossible. All of our transformations are informationally destructive, so there's just not enough information present in that vector anymore to let you reconstruct the original input.
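A rough illustration of "information that's just not there anymore": the sketch below is not VEIL's actual transformation, just a generic informationally destructive map with invented feature values. Projecting eight raw features down to four numbers leaves infinitely many raw inputs consistent with the output, so even a computationally unbounded attacker cannot recover the original values.

    import numpy as np

    rng = np.random.default_rng(42)

    # Invented raw features for one applicant:
    # [age, income, years_employed, debt_ratio, fico, inquiries, accounts, defaults]
    raw = np.array([34.0, 82000.0, 6.0, 0.31, 712.0, 3.0, 7.0, 0.0])

    # A rank-deficient linear map: 8 inputs -> 4 outputs. With more columns
    # than rows, the matrix has a nontrivial null space, so the original
    # vector is mathematically unrecoverable from the output alone.
    P = rng.normal(size=(4, raw.size))
    veiled = P @ raw

    print(veiled)  # four opaque numbers; no age, income, or FICO visible

Any vector of the form raw + n, where n lies in the null space of P, produces exactly the same veiled output, which is the sense in which the information is destroyed rather than hidden behind a key.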

Phil: And you did this all by yourself?

Jeremy Samuelson: Well, it took a very long time...

Phil: uh, a lot of number crunching. So you're good at math, huh?

Jeremy Samuelson: Yeah, so I am a mathematician. That is my background. My degree is in mathematics; it's what I studied, it's what I love. And machine learning and AI, regardless of what anybody tries to tell you, is actually just very applied mathematics. It's all just math under the hood. That's how it works.

Phil: So give us an idea of the overall business model of Integrated Quantum Technologies. Is it customers on subscriptions, licenses, pay-per-use? How does it work?

Jeremy Samuelson: Yeah, that's a great question. We are pretty flexible in the ways we can deliver it, and we've actually done this in a number of ways. In cases where the client has, say, a very custom environment, a very custom kind of setup, we're able to do a sort of forward deployment, where myself and members of my team get embedded within that client's data science or MLOps team, and we help them get everything set up and figure out how to tailor it to their specific MLOps pipeline. But for organizations with a much more standard-looking setup, it's actually really easy. We are releasing apps very soon on multiple hyperscaler marketplaces: Snowflake, Databricks, AWS, Microsoft Azure, GCP (Google Cloud Platform) and others. All of these platforms offer a marketplace, so we can just package up that Vector Encoded Information Layer as a native app for whatever platform. Then if you're, say, a hospital or a bank and you're on Snowflake, and that's where all your databases sit, you can just go onto that marketplace and download the VEIL app. There are a couple of steps that require some manual configuration, but that's pretty simple to do: you have to point it to where the data source is, and to the deployed model it has to talk to.

00:30:00

Jeremy Samuelson: Once you've set that up, you can kick off training to train the downstream model, set it up for inference, and away you go. And then it's actually just usage-based: we charge per gigabyte that's processed through it, so you really only get charged based on how much you use it. If you don't use it that much, it doesn't cost you that much, and there are no big upfront costs. There are competitors we looked at as we were coming up with our pricing model where it's just kind of crazy to get yourself set up: it's like a $350,000 platform fee just to get started, and then per additional data plane it's another 50 grand, and all these big, big costs. I looked at that and thought, I don't really want to do that to our customers. I want this to be as frictionless, as easy, and as straightforward and honest as it can be. That's why we've gone with the model we've gone with. But we are pretty flexible. If a customer came to us and said, we don't really want our usage monitored, we'd rather pay some kind of flat license fee, we're not going to turn that customer away. We're totally open to having the conversation and figuring out what the deal needs to look like.
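As a toy illustration of how usage-based billing of this kind works (the per-gigabyte rate below is invented; IQT's actual rates aren't stated in this conversation):

    # Hypothetical per-GB rate, purely for illustration.
    RATE_PER_GB_USD = 0.50
    gb_processed_this_month = 1_200

    bill = RATE_PER_GB_USD * gb_processed_this_month
    print(f"${bill:,.2f}")  # $600.00: no platform fee, cost scales with usage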

Phil: So it sounds like it's pretty scalable.

Jeremy Samuelson: Oh, absolutely, yeah. We of course offer support, so if you run into any issues on Snowflake or AWS or whatever, you can call us or contact us. But I really want it to be as easy and as frictionless as possible. It shouldn't require customers to call us up or go through any kind of lengthy sales cycle. They can just decide they want to use it, be up and running in a few minutes, and off you go.

Phil: So Jeremy, can you describe the phase the business is in at the moment: existing revenue, and the money it's going to cost to develop the VEIL product and bring it to market?

Jeremy Samuelson: Yeah, absolutely. So we're very early on in our go-to-market strategy. We do have some clients. Like I mentioned earlier, we have one we just kicked off with this week that's based here in the US, and we have another very large overseas client we're working with in the healthcare space. So we have some early nibbles, some early clients that we're bringing on board. We have some partnerships that are also lining up; we're getting invited to join a lot of these very big partner networks that I think are going to get us in front of a lot of clients and cause the whole thing to really take off. And we're also very, very hard at work. One of my jobs specifically is leading our innovation center, and we're very much a research-first organization. VEIL is just the first of many things we've got coming down the pipeline. As we continue to do more research and development and turn out more products, I anticipate you're going to see a lot of exciting stuff coming out of our lab in the next six months or so.

Phil: What does the competition look like? I mean, you described the costs some of your competitors charge for this kind of thing, but do you know who the competitors are, and how much does VEIL change the business model?

Jeremy Samuelson: Yeah, absolutely. So there are a few that I keep my eye on. Interestingly, I found out about these guys after we picked our name: there's another organization called Enveil, and according to their website they're entirely focused on homomorphic encryption research. They're putting a lot of work and research into that, betting on that horse, and personally I don't really see the potential in it. Homomorphic encryption relies so heavily on traditional encryption that they're not post-quantum as it is. The system is slow, it's very expensive, it's very bulky, it's got a lot of issues, and they also haven't solved the performance trade-off problem where your model reliability goes down. You start adding actual post-quantum cryptography on top of that, and I think the whole thing just collapses under its own weight. So I'm interested in seeing how they fare, but I don't worry about them too much. The other really big one is Capital One, which actually has a tokenization-based solution called Databolt. They probably do have some legs, coming from a big, established organization that everybody has heard of. But tokenization is not really a new solution; it has existed for a long time. In fact, there are other organizations I've worked at where that is their primary solution for protecting data used for analytics and for machine learning and AI.

00:35:00

Jeremy Samuelson: at that that is their primary solution for how they protect data that you're using for analytics and for machine learning and AI and things like that. Yeah, it still very much has a number of issues. The, the tokenization is invertible, right. You're able to undo it if you want. And it does have a better like model performance trade off than things like homomorphic encryption or differential privacy. But it definitely uh, does not. I've never ever seen it exhibit this property of that it actually boosts model performance. So I think that that's something that is really special about us and that everybody else in this space kind of has to watch out for. Right. So you know, people are going to be I think interested in us for the simplicity and the kind of frictionless way that they can just onboard themselves and protect their ML pipelines and ML ops systems. But also I think there are going to be plenty of customers that find their way to us that don't even care about privacy. They just need that boost in performance. They just want that better model. Right. They want that better model performance. And we're able to provide that to them.

Phil: So you referred to hyperscalers, and these, I guess, are the large companies that are growing their AI capabilities in a big way at the moment. And when you say partnerships with them, does this mean they would be recommending VEIL as a product, or that they would be deploying VEIL on their end? Is that the sort of thing that's happening?

Jeremy Samuelson: Yeah, absolutely. In many cases they're recommending us to clients in their network. In other cases, we're actually another line item on what they're selling; it depends on the type of partnership. Some of them are very focused on hardware, and if VEIL helps to sell their hardware, then we just get bundled up into the whole package that gets sold to the client. So there are a number of ways partners are able to implement this. It could be reselling, it could be that they're a channel partner able to implement us directly in their clients' environments, or it could be as simple as a recommendation. All of the above, really.

Phil: So remind us of the ticker again, the code of the company, and also how people can find out more information about integrated quantum technologies.

Jeremy Samuelson: Yeah, you bet. So we actually still have our old ticker, it's ICS, and we're on

Phil: the cse, which is the Canadian Stock exchange, isn't it?

Jeremy Samuelson: Yep, yep.

Phil: Okay. Jeremy Samuelson, thank you very much for joining me today.

Jeremy Samuelson: Yeah, thanks for having me.

Chloe: Thanks for listening to Shares for Beginners. You can find more at sharesforbeginners.com. If you enjoy listening, please take a moment to rate or review in your podcast player, or tell a friend who might want to learn more about investing for their future.

00:37:39

Any advice in this blog post is general financial advice only and does not take into account your objectives, financial situation or needs. Because of that, you should consider whether the advice is appropriate to you and your needs before acting on the information. If you do choose to buy a financial product, read the PDS and TMD and obtain appropriate financial advice tailored to your needs. Finpods Pty Ltd & Philip Muscatello are authorised representatives of MoneySherpa Pty Ltd, which holds financial services licence 451289. Here's a link to our Financial Services Guide.