Training the Future with Mark Russinovich

Sponsor: FAFO.FM
Don't FAFO with cloud backups. Do it right with Arpio.
Have a podcast you need help with? Reach out to HumblePod.
Mark is the CTO of Azure and has decades of experience exploring the internals of systems. As a developer and a person responsible for running massive AI infrastructure, Mark has a lot of insights and ideas. We talk about the end of SaaS, training engineers, and a whole lot more on this episode.
Welcome to Fork Around and Find Out.
I am Justin Garrison.
And this is a very special, our first in-person recording with anyone.
We are at SCALE 23x.
I'm here with Autumn Nash as always.
Welcome Autumn.
So excited to be at scale.
It's nerd summer camp.
It is nerd summer camp.
And our very special guest, Mark Russinovich.
Thank you so much for joining us.
Thanks for having me.
CTO of Microsoft Azure.
That's right.
You have a keynote tomorrow and I want to go all into the details and what you're talking about.
I actually really want to do one thing as a personal thing first.
Where I know you are very famous for Windows Sysinternals.
I read your Zero Day book that you published 12 or 13 years ago.
You have a series of books that I don't think a lot of people know about.
I listen to them.
I listen to audiobooks, but I only buy physical books when I have met the author in person.
So if you wouldn't mind signing that, I would love it.
I literally just read it again last week to refresh my memory on the book.
So zero day, source code.
What was it?
There's three of them.
Zero Day, Trojan Horse, and Rogue Code.
Rogue Code.
That was it.
It's all the same character, Josh or Jack?
Jeff Aiken.
Jeff Aiken, man.
Yes.
Man, I just read it and I love them.
It's super fascinating, especially thinking back that that was 13 years ago.
And you were already talking about it. A big warning that I took away the second time listening to it was what happens when we lose control of technology.
What happens when we don't have the manual?
What happens when we don't know what's going on?
And that was a big warning out of this book.
How does that play out in 2026?
I think it's going to play out.
I'm like, how do we get back from that? This might be worse, right?
Like, more people don't know what's going on and they're trusting computers even more, you know, every warning in the book of like, yeah, you don't have manual controls to turn off the reactor.
What's going to happen?
Well, I think, you know, when it comes to reactors, people are putting in good mitigations and, you know, fallback systems and things like that.
It's kind of the everyday stuff.
Like, you know, when the airline baggage claim system goes down and all travel around the world starts to stop because London Heathrow's baggage claims aren't working, or the airplane routing system fails and can't come back.
I mean, one aspect of this is that when it comes to those kinds of failures, which have typically been due to configuration issues or bad rollouts or overload of some kind, we've got a lot of mechanisms in place that we've developed over time to be resilient to those kinds of problems.
And those patterns are going to become more and more ingrained in systems as they get built, rather than everybody having to make it up every time they design something new.
So there, I do see hope when it comes to our systems getting more resilient.
The risk, I think, is that the systems are getting just incredibly complicated.
And if you take a look at any one of those failures from those large scale systems, it's usually not just one problem that happened.
You know, when, especially when I look at failures in Azure, it's not one issue, but it's actually, this issue happened and this issue was going on at the same time.
And this other issue took place.
The combination of this perfect storm.
And that's just because the systems are so complicated and it's hard to defend against that.
Is simpler systems the answer?
I don't think they can be simpler.
You know, this kind of takes me to when I'm designing something in Azure.
I've been designing things with people in Azure, and when you present other people the design, they go, wow, that's really complicated.
And I'm like, okay, where can you simplify it?
Because I think it's as complicated as it has to be, because there's lots of problems that are just inherently complicated.
And so you don't want to overcomplicate things.
But at the same time, there's some floor to the complexity that you can't get beneath.
Otherwise, you can't solve the problem.
Especially at a certain scale.
Yeah, it's like solving simple problems can be simple.
But I always tell people that like the most complicated system I've ever seen was the back end of Lambda, right?
Like when I worked at AWS and I was just like, oh, this is so easy to use.
What's the back end?
I'm like, oh, my gosh.
I work on Kubernetes, and people were like, Kubernetes is complicated.
Like, hmm, people can build some really complicated stuff.
But the interface can be simple, similar to like a car, right?
Like look under the hood and like nobody knows, you know, like what percentage of people can actually fix what's under their hood today.
There's an abstraction up at the front, which is put it in drive.
Yeah.
And it just goes.
Yeah, you put your hands on the steering wheel and hopefully you might use a blinker.
Do you think abstraction is going to be what makes the risk bigger?
Because I think, like, when you look at the abstraction of cloud, when people forgot that it was just a bunch of Linux servers, like, I think we keep abstracting away.
And I know that we obviously don't write assembly anymore, well, not all the time, but I feel like sometimes when you get farther and farther away and have less context, it gets to the point where you don't fully understand how it all works.
So like, how do you see us kind of navigating this new abstraction and high velocity in which that we're going to be working with AI?
So first, I think these layers of abstraction, we've been growing them layer on layer over time, and we've reached levels of abstraction that are too deep for any one person to really understand everything.
When something happens in Azure, it requires a lot of information to figure out exactly what's happening because it's across multiple layers of the system and different components of the system.
All because at the top, there's an abstraction that people do understand.
You know, actually, it's kind of funny, because Scott Hanselman and I gave a talk at Build together on the layers of abstraction that built up over time, starting with an Altair where you're programming with little lights.
And then all the way to cloud computing, which is orders of magnitude more layers of abstraction that have been put on top of what started in the 1970s.
But I think what you're referring to is the layer of abstraction that's coming into place right now, which is AI, which is another system executing tasks for you when you're doing it with agentic AI.
And that layer of abstraction removes us even more.
You don't really understand what the LLM is doing or why.
I think that we're figuring that out.
And I think we will put in guardrails so that the AI can't go off the rails and do something that it shouldn't do or something that's going to be problematic.
As well as enough monitoring and logging to identify when there is a problem and be able to figure out what happened.
I mean, this is what we've done over time as we build these layers of abstraction, you put in guardrails, you put in logging, you put in systems that understand how to get through that.
And in fact, AI is going to help us with that, too.
So you think there'll be, like, an observability moment for LLMs so we can understand?
I mean, observability in cloud and stuff is all about understanding the system, whatever state it's in.
And being able to do that at an LLM level, because right now we make it work and we don't necessarily know, like, how did we get this output, or what exactly is it doing?
But we'll be able to observe the state of the system and maybe influence it or at least protect ourselves from it.
Yeah, or understand when something went wrong, what happened?
I think we haven't reached that moment of really understanding because agents are relatively still new and agent frameworks and orchestration systems are still new and what we're trying to do with them is still new.
We're still figuring out, you know, what kind of problems are good for agents.
How would you define an agent?
AI based system that makes autonomous decisions.
Autonomous decisions?
To carry out a task.
How does that, how do tools come into that?
I've heard people say like it's an LLM with a tool.
Well, actually they can use, I mean, to carry out a task, you need to have some effect on the outside world, you know, outside the system.
So yeah, tools would do that, or skills or whatever, you know, whatever we're calling them, CLIs or whatever.
Whatever branding we want to put on top of it.
Yeah, MCP, whatever.
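Mark's definition above (an AI-based system that makes autonomous decisions, carrying out a task through tools that affect the outside world) can be sketched as a toy loop. This is a hypothetical illustration, not code from the episode; the "model" here is a hard-coded stub standing in for a real LLM call, and every name is made up.

```python
# Hypothetical minimal agent loop (illustrative names only): an AI-based
# system that autonomously decides which tool to call to carry out a task.

def model(task, observations):
    """Stub policy standing in for an LLM call: pick the next action."""
    if not observations:
        return ("read_file", "notes.txt")  # gather information first
    return ("done", f"summary of {len(observations)} observation(s)")

def read_file(name):
    """Stub tool: the agent's channel for observing/affecting the world."""
    return f"contents of {name}"

TOOLS = {"read_file": read_file}

def run_agent(task, max_steps=5):
    observations = []
    for _ in range(max_steps):  # step cap: a simple guardrail
        action, arg = model(task, observations)
        if action == "done":
            return arg          # task complete
        observations.append(TOOLS[action](arg))
    return "gave up"

print(run_agent("summarize my notes"))
```

In a real agent the stub policy would be a frontier-model call and the tool table would hold things like file access or shell commands, which is exactly where the guardrails, monitoring, and logging discussed above come in.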
I love the paper that you and Scott wrote together.
And I think that it's like the elephant in the room that a lot of people have ignored and it really gave me like hope when you two really focused on that and were willing to come out and like talk about it, you know, and have the conversation that people weren't having.
For people who are kind of still growing and learning as developers right now, and even people that are maybe coming in, like you said, we're still learning, right?
How to use these things?
How do we continue to like develop and teach engineers how to like engineer and to grow in the same manner that we were expecting before, but now kind of in this new AI age, like how do they still get the context?
How do you go from like a junior engineer to a senior engineer in this new world?
And like, how do we give them those skills and like mentor and teach them?
And because we don't really know like what the senior engineer is going to look like in five years, you know, everything's moving so fast.
Yeah.
So, the paper you're referring to is one that Scott and I published in Communications of the ACM that went online.
It's going to be in the print issue in April, but it's about what we should do to head off the talent pipeline problem that I think we're heading towards, current course and speed.
And backing up, this came as a realization to me early to mid last year as I started to use coding agents, which showed up on the scene around February of last year.
And as I was using them, I found that my productivity boost from AI assisted coding went from 1.5 to 2x to 5 to 10x.
And this is just...
How do you measure it?
It's, you know, it's very ad hoc.
Vibes?
Yeah, it's pretty much vibes.
I didn't do A/B testing, but I looked at one of the projects that I made. I mean, I've made a whole bunch of projects.
One is a Chrome extension that goes and scrapes New York Times WordleBot data to show you your trends over time and to show you individual games.
It makes it much easier to navigate.
I've never done a React UI.
I've never written a Chrome extension.
I've not really written JavaScript.
I haven't used node packages.
And this thing is entirely built on that and it works and you can go get it from the Chrome store.
And when I look at that, if I had to do that pre-AI, first of all, I would have to learn those things.
And that would have taken me days, weeks, maybe even months to learn to the point where I could understand what I was doing and actually execute it.
And the second thing is just the work of coming up with the design, typing it all in.
That would have also taken a lot of time.
It took me about two weeks just part-time prompting to develop this thing.
And then another few days of part-time prompting to polish the UX and add some things to it.
But again, I think easily that's five to 10x.
And I say five, it's probably more like 10, but I'll say five just to be conservative because, you know, I don't want to be wrong on the other side of it.
But in any case, I did project after project like this where I estimated that my boost was somewhere in that range.
But when I looked at what I was doing, I was micromanaging the AI because the AI was making mistakes, gaslighting me, doing the wrong thing, getting stuck.
And I would have to prompt it and guide it and correct it nonstop.
I mean, and I've been collecting every time I do agentic coding, every time the AI does something ridiculous or incorrect, I grab a screenshot of it.
So and I've got just dozens.
Every time I sit down and I spend an hour coding with the AI, I get at least one or two more moments of, like, what the hell?
You know, the test is working when it's not working, the race condition's fixed when there's a sleep statement in it.
I take a note of it and then I add, like, whatever I can to my Copilot instructions so that way I can get it to stop, like, falling into the same cycles over and over again.
But you've had to unstick it.
A lot, yeah.
One of my, and I've just got so many hilarious stories from AI.
One of them, and these are all frontier models, you know, Opus 4 and GPT-5 kind of stories.
One of my favorites was, I have it write these tests, you know, everybody's got to write the tests to make sure it's not regressing and that you understand that things are making forward progress.
And the AI can also look at the test and see when it screws up.
And at one point I'm like, okay, time to go look at the test.
So I go look at the tests, and they're all passing and they print some stuff.
One of them, I look at the file and it's just print statements.
It's like, if the test was written, it would do this.
Just, I was floored by that.
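The kind of fake "test" Mark describes can be illustrated with a small hypothetical sketch (none of this code is from the episode, and the names are invented): a test function that always "passes" because it only prints what it would have checked, next to the real assertion-based version.

```python
# The pattern described above (hypothetical sketch): a "test" that always
# "passes" because it only prints what it *would* have checked.
def fake_test_parse_empty_input():
    print("if the test were written, it would parse '' and expect []")
    print("PASS: fake_test_parse_empty_input")

# A toy function under test, so the sketch is self-contained.
def parse(s):
    """Split a comma-separated string, dropping empty pieces."""
    return [p for p in s.split(",") if p]

# What a real test looks like: it exercises the code and asserts on it.
def real_test_parse_empty_input():
    assert parse("") == []            # fails loudly on regression
    assert parse("a,b") == ["a", "b"]

fake_test_parse_empty_input()   # "passes" without testing anything
real_test_parse_empty_input()   # actually verifies behavior
```

The fake version will happily keep "passing" no matter how badly `parse` regresses, which is why the file of bare print statements was so alarming.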
I've had it do things like that too.
I'm like, let's, you know, copy this test from this open source project so we can try to build, like, a POC of a test-first service.
And it was just basically copying and pasting it instead of taking it and actually pulling it down.
And it just made its own when it was too hard.
Yeah.
And I was like, this is not the same.
Yeah, it is.
It is lazy.
It'll take shortcuts.
But in any case, what I, what I was realizing is a few things.
One, AI was doing stuff that typically early-in-career, new developers would learn from as they come into a job and write the unit tests or fix the bugs.
And it's like, AI can go do those things now.
And so one issue is why even hire junior engineers if AI can do their job?
And the second thing I realized as I was doing this is I was getting this five to 10x boost because I have decades of software engineering experience.
I've learned the hard way by spending hours on some stupid bug that, you know, you finally crack and you're looking all around and trying all sorts of techniques to figure out what's going on.
That has given me the experience and insight and what Scott likes to call a code taste to be able to push the AI forward in the right direction and unstick it and see through its gaslighting.
But a junior engineer doesn't have that experience.
And in many of the cases that I'd run into, I'm like, I only know that this is wrong because I have done it and a junior engineer is not going to know.
So they're not only probably not going to get this boost that I'm getting, they might even get a drag because they're going to run into a problem and they're not going to know how to get the AI past it.
And so if the incentives then build up to A, don't hire junior engineers and B, even if you do, how are they going to learn when they're using AI to write everything and they don't really know what they're doing at some point?
I mean, you know, yeah, they might get some.
And by the way, they're probably getting a superficial boost and there's a lot of AI slop underneath the hood that is going to cause a downstream impact.
They don't know the difference.
They don't know.
So, you know, if we stop hiring junior engineers, obviously five to 10 years from now, we've got a huge problem.
But how do you grow junior engineers if hiring them actually is going to slow you down?
I even think it's going to be interesting when it comes to growing mid-level engineers and what being a senior engineer means because now it's a different type of mentorship on how to grow mid-engineers and how to be a senior engineer, you know?
Well, so, you know, that's a whole different exploration.
But in the first place, I came to the conclusion that the only way to actually keep that type of pipeline going is an apprenticeship model.
We're actually hiring them to grow them, not hiring them to go do the grunt work that AI is now doing for you, that they normally would have learned from.
So how do you make them learn?
You can't let them use AI all the time.
You've got to make them, you know, exercise their brain muscle and learn the hard way.
And what reinforced this for me is a few studies that happened early last year and then other studies that have reinforced that have come out this year.
One of them from last year is one of my favorites ever, which really shows it.
The impact of AI is something that the researchers call cognitive debt.
It's a study from MIT where they took 60 Boston area adults and gave them all SAT-style essays to write.
And they divided them into three groups.
One group, closed book.
They just had to write the essay.
The second group could use Google search.
And the third group could use ChatGPT to write the essay.
And they monitored using functional MRI their brain activity as they were writing it.
And then they quizzed them about their essay an hour after they wrote it and a week after they wrote it.
And I think a few other times.
And it's exactly what you would imagine.
The people that wrote it, closed book, lots of brain activity.
They wrote.
And a week later, they're like, oh, here's exactly what I wrote.
The people that use ChatGPT, almost no brain activity.
And they couldn't even remember an hour after they'd written the essay what they'd written.
And isn't that slowing down on purpose?
That's the thing that helps organizations scale and remember.
Because as an individual, you can do it, if you're a junior and you actually write the code.
I remember when Ruby on Rails came out.
Ruby on Rails accelerated how fast I could ship an app because it just did a bunch of the grunt work.
I'm like, oh, cool. I just put in a couple of things that are my app.
And it's all this file-based routing and it all just works.
I don't remember how it works under the hood, but it made a bunch of decisions for me.
And I was moving fast.
But now I can't tell you how that worked back then.
And at an organizational level where if you have junior engineers, they do force you to slow down.
And a senior engineer has to mentor them and train them and bring them up to speed on this is what this outage feels like.
This is why that code won't scale beyond 10,000 users or something.
And slowing down on purpose to say, hey, actually, at an org level, we need to make sure that we're bringing everyone along.
It feels to me like a little bit like DevOps back in the day.
We're just like, hey, we want to cross pollinate and train everyone to make sure everyone kind of like ops people and dev people.
They know how to write code. They know how it runs.
They know why the decisions are being made.
And we want to become learning organizations.
And now we just want to be shipping organizations.
We're just like, oh, just move as fast as you can. We don't care.
But that cognitive disconnect of like, well, how does it run? What does it do?
No one cares anymore.
I also think a little bit of trauma goes a long way, right?
If you drop a table, you're never doing whatever you did before.
But how do we incentivize senior engineers or other people to invest in teaching?
You know what I mean? And I also think it depends on how you use AI.
Like there's like Claude thinking there's different ways that you can have it.
You can have it give you multiple options on how to solve something and then go pull the sources.
And you know what I mean? Like there's ways that you can get to a lot of the planning and requirements.
So you're still using your brain without going straight to like, go do this thing for me.
How do we like teach that?
So Scott and I proposed this program we call preceptorship.
Scott's wife is a nurse.
In the nursing profession, they have a form of apprenticeship called preceptorship.
And there's nurses that are designated preceptors in the hospital.
Their job, one of their primary job functions is to bring up the junior nurses,
to coach them and guide them and observe them and help them out.
And so we believe that the way to get through this is to identify senior engineers that like to teach,
that feel like they get energized by that and they're good at it,
and designate them as preceptors and then assign early in career developers to them
where they're then being guided and coached.
And this doesn't mean 24 by 7, but it means, you know, probably at least one day of time a week
spent with the junior engineer over their shoulder watching them, coaching them,
and then also giving them tasks to go perform.
And I think what's ultimately going to happen, and on top of that to help this,
is AI-assisted teaching of the junior engineer, which is, like you said,
instead of the junior engineer saying, here, go write a piece of code that does X, Y, and Z,
the AI instead of doing it off the bat is going to say, well, how would you architect that?
You give me the pseudocode and I will fill in the gaps kind of thing.
And I think that's going to be like so important because it also means that you can scale the learning too.
You know, you can do that at a much higher scale and it can be more accessible to people.
Does that work in nursing because of the life critical?
Like there's life's on the line, literally, so like we have to have,
it almost seems like there needs to be regulation around that.
Well, I think, I don't know about regulation, but I think what's going to happen is there's going to be
a transformation of the university company-corporate relationship when it comes to training
where incentives get aligned and there's joint teaching happening coming out of school into industry
where if you take a look at hospitals and medical, and this is reflected in other apprenticeship-type fields as well,
there's designated teaching sites.
And those teaching sites, what they'll do is that somebody that goes to the teaching sites
and goes through a program like a residency internship or whatever
gets some kind of certificate or diploma that is valuable on the market.
And they're also then incented to stay at that place that they learn through different incentive structures.
Is this just an AI bootcamp? I mean, like we had coding bootcamps to get people those certificates
and at some point they became not valuable to enterprises because they actually were worthless.
They weren't teaching people the things they needed to learn.
And I don't know if one exists today, but I'm sure there's going to be one that's like an AI will train you and give you a cert.
But we have to keep it valuable. We have to make sure that they actually know what they're doing at the end of the day.
Well, that's right. It can't be a cert. So bootcamp, I think, doesn't reflect the depth of what's required here,
which is a program that's probably two, three years in length of doing this kind of stuff.
30-day bootcamps are not enough.
As somebody who has a college degree, but also has an apprenticeship and how they got into tech,
I think we should have always been doing tech apprenticeships because it is more of a skill that you're learning.
And a lot of the ways that college teaches coding and being an engineer are not the same as what you do at work.
They don't teach a lot of the very important parts of being in the workforce.
And being able to actually work with other engineers and solutions architects and see how it all works
and what it's like in production in real life, I feel like that's such a valuable skill.
And it's wild because for a long time, I did an apprenticeship with AWS,
but there was funding through Washington, Virginia and all these other apprenticeship budgets.
And you get to go in and work and do the actual thing.
And I feel like I learned so much more from my apprenticeship than I did my college degree.
It's one of the things I love about a conference like SCALE.
SCALE is volunteer run. It's completely volunteers. And look at the networking gear we have here.
This is literally enterprise grade temporary networking.
If you go down the hall to the NOC here, these people are experts in their field.
And they'll take on anyone to say, hey, I just need you to run cables this year.
And maybe next year I'll let you program the switch.
And the year after, you're going to do the Wi-Fi, whatever.
They're going to get that apprenticeship in a volunteer setting.
I feel like conferences are going away from that and becoming a lot more commercial, in that we're just going to hire the venue to do all this stuff.
That's why this is my favorite conference.
The hacker feel of this, the community feel of it, I don't get it many places.
I learned so much more here.
But I think that that democratizes the opportunity to get into tech.
And it ends up giving us the most curious people.
The people that aren't just from a certain background, but people that are genuinely interested.
Because when you have apprenticeship programs and it's not this high cost or fee to get into something.
And it's not just, like, the resources you have, but your curiosity and the work ethic that you will bring to that apprenticeship.
It not only like democratizes the access to those things, but it brings in the diversity.
Like, if you look at Harvard Business Review, when it talks about why diverse perspectives build better software and having people from different backgrounds, apprenticeships feed that model.
They also like change the way that like you interact with the world, right?
Because you're bringing in these different people.
Like we had people with PhDs in our apprenticeship program.
We had people from all different backgrounds.
And who do you see around scale?
Like Casey has a theater major.
You have science or math.
Like, you know what I mean?
And some of the best engineers I've ever met have these different backgrounds and they bring that.
And they bring that problem solving, that curiosity, that knowledge.
And it's wonderful.
The curiosity piece, I think, is my favorite part of AI, where AI lets me follow my curiosity better than anything else I've had in the past.
It's cheap and fast to go through several different ideas.
You know what I mean? You can do three different proof of concepts and see why one goes wrong.
But I think it's the nature in which you use it.
Like, are you using it to be curious or using it to be lazy?
Because we're all humans. That's human nature, right?
Yeah, well, and the incentive structure has to be there, right?
I mean, if you're incented to move fast, you're going to move fast and cut corners.
You hire a junior engineer and you say, here's a project for you.
Do it as fast as you can.
You know, we want this tomorrow.
That's a lot of the pressure.
This ticket is one T-shirt size. You do it now.
That's right.
That's a lot of the pressure, too.
I think some people are vying for jobs and the job market is really bad,
and people just want to do well.
And we're really over-indexed on efficiency and doing more with less.
And I think we're losing a lot of the teaching part, which is really interesting,
because do you think this now changes the skills that you need to be a good engineer?
I actually don't.
And actually, I went back to Carnegie Mellon, the school that I went to,
in the fall, and after I'd already thought about some of these ideas.
And of course, the day I spent on campus talking to faculty and staff and students,
the question was, what's happening to software engineering?
Is it going away?
What's happened to university?
People are talking about going right out of high school into the software industry,
skipping college altogether.
And the way that I look at it, again, going back to the experience that you've got with AI
and that I had, is you need to know what you're doing to make the most use of this tool.
And this bet that AI is going to be magical
and it's going to solve all the problems and work 100% of the time,
I think that's a foolish bet to make.
And I don't see a line of sight from where we are with AI and this current technology
to AI just gets it all right.
Again, these are frontier models that we've got and they're doing a lot of brilliant things,
but they also make a lot of mistakes.
They just don't have the big picture.
I think I meant more of like, do you think it's going to have more of the interpersonal skills?
Not that coding doesn't matter.
Because we all knew that coding is the easy part of doing that job 90% of the time.
And we always say that you need to get requirements and you need to be able to work together.
But I feel like a lot of times when it comes to hiring, people just want really technical people.
And I think that this almost democratizes the access to being able to learn if it's used in the right way
and if we're giving that access.
And I wonder if it makes you being a whole person and that curiosity
and that ability to work with others and that ability to experiment and be wrong
and just grow, like growth be more important.
So for one, I think it's always been the case that to be really successful,
you need to have interpersonal skills.
You need to have empathy and you need to be able to...
You're making me so excited.
I felt like I was so sad about tech and I was reading about you today and yesterday
and it's like, I'm genuinely really excited to have you as our CTO
because you're saying the things that I think we all think quietly,
but you are willing to put yourself out there and say the elephant in the room
and that makes me so excited.
Just reading about you, you enjoy solving problems and researching and writing, professionally and in technology.
You're a whole person and that gives me hope that we're going to go in the right direction
because I think we will get some sort of something from AI,
but I think the companies that will truly be successful are going to be the ones
that are looking at this whole pipeline issue and this issue as a whole
and how to use it as a process and truly make it a part of the process
and it's really exciting to see not a lot of people will say that out loud
and really put their money where their mouth is to figure that problem out
and I just think that is so rad.
Because the incentive's not there.
They don't reward it.
But that takes bravery though.
You could totally just be like every other CTO and just be like, oh, whatever.
AI is going to solve all the problems.
So I just think, no, but that's true.
There's plenty of layoffs and doubling down.
There's plenty of people.
Exactly.
And I think for you to come out and say that and to do the work of that paper
and to truly think about those things is kind of like it's not happening.
People are not doing that right now, so I just think that just gave me a hope and excitement
that maybe we will all be okay.
Yeah.
I mean, to that, talking about the need for people to have interpersonal skills
to be successful, these companies are made up of humans that need to decide what to do.
It's a collective group.
Yes, you've got leaders at the top that are trying to get the company to move in some direction.
The most effective leaders get people energized about what they're doing
and you need to typically have interpersonal skills to actually get the most out of people.
Yeah, you can hire people and if they don't like you as a boss,
they're going to clock in, do some crap, clock out.
But the way you get the most out of people is if they feel like they're part of a good mission,
they're heading in the right direction, they're making a difference.
One of the things that I've heard other people saying, I think it's true
and I run into this all the time, is like technology would be easy if it wasn't for the people.
Do you buy into the single person company AI, a bunch of AI agents doing all the business stuff behind them?
I don't know if you listen to Shell Game podcast where he built a startup that was just him and a bunch of AI agents.
They literally would send themselves emails and Slack messages and plan stuff.
All of them had a role. They had a CEO and a finance person and a coding bot or whatever.
Then he just organized them and said, hey, we're going to do this thing.
He'd show up for meetings and then they would do stuff.
It feels so weird, but also I know people are rooting that on and that just doesn't feel right to me.
Given how off the rails AI can get, I don't know if you saw Anthropic run a couple of simulations
where they have the AI try to run a vending machine company, and it ends up doing horribly.
One was giving people back money the first round. The second round is trying to scam people.
One tried to rat on the fake CEO and his affair that he was having, scrubbed through all of it,
and then tried to hold it over his head so he wouldn't turn them off.
One actually went after a maintainer and then tried to put a hit piece out on the maintainer that wouldn't accept its code.
Yeah, but, you know, you could say that we also go off the rails.
That's true. But again, going back to, you know, how capable are these things going to get?
So one thing that ultimately they're in service of humans, right?
They're not in service of AI. And if they're in service of humans, they need to understand what the humans want.
And if you're like letting the AI run off on its own,
you can't guarantee that it's going to be doing what you ultimately want.
This is, you know, I'm watching it code and I'm like, no, that's not what I want. You're off the rails.
And that's the hardest part of engineering is making sure that you have the right requirements to build the right thing.
You know, so I feel like.
I'm still waiting for the article to drop where an AI implements its own like security encryption,
like the thing you never do in writing software.
I'm sure it's happening.
I'm sure it's happening. I'm absolutely positive it exists out there and we just haven't found it yet.
In which they've been dropping tables, deleting backups, and just completely going nuclear.
Someone's going to write their own encryption.
AI is going to vibe code this encryption library and just, like, actually, yeah, no, that was not encryption at all.
Someone looks at it in Wireshark and nothing was actually encrypted.
But yeah, I mean, I think on one hand you see people saying the cost of software development is dropping to zero.
First of all, I don't think it can drop to zero.
The more complex the system, the more physics gets involved in the development of the software.
As you build this complexity, you need to have tests to ensure that the system is functioning as you want, like you want it.
And that when you add new functionality, it's not breaking things that were working.
Yeah, so like just running the test, that doesn't go to zero.
There's compute cycles that need to execute to make sure your tests pass.
Deploying, that doesn't go to zero.
So there, this cost of software development won't drop to zero.
It just cannot.
And there's also evaluation that doesn't drop to zero either.
Which is, again, like you've got to decide if what the thing produced is what you want.
And that's evaluation.
And you can't have AI be the sole evaluator.
You need to come and look at it yourself.
Now AI can help you evaluate, but still you need to apply your human judgment.
And the higher the stakes, the more certain you need to be that what you're developing is actually correct and meeting the goals.
Long story short, I don't see the single person running a very complex, scaled organization with AI bots really working.
I could be wrong.
But just like I don't see AI, you being able to give it a spec and it's off and makes something great that's sustainable and does everything you want.
Do you see the flip side happening?
Like the end of SaaS is...
Oh yeah, don't get me started about this one.
Scott and I had one of our episodes on our podcast about the end of apps and SaaS.
And because I just go off and he's like, okay, let's talk about that.
Because he likes pushing my buttons.
But I know, you know, just like this question, the example that I used, and I'm sure you've heard this one too, is like the, you know what?
Our HR, our expense reporting software at my company sucks.
And so what I did is I just vibe coded a thing where I just dropped my receipts into this folder.
It looks at my calendar.
It knows where I was and files my expense report automatically.
Look, I vibe coded that.
I don't even know what...
Didn't look at the code.
And look how awesome that is.
SaaS is dead.
And then that's their conclusion.
Where apps are dead.
Like I just...
I feel like they just jumped three different ships and I'm just like, oh.
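For context, the kind of throwaway script being described might look something like this sketch; the folder layout, receipt format, and calendar data here are all hypothetical stand-ins, not anything from a real expense system:

```python
# Hypothetical sketch of the vibe-coded expense bot described above.
# The folder layout, receipt format, and calendar data are all made up.
import re
from pathlib import Path

# Stand-in for a real calendar API: date -> event.
CALENDAR = {"2025-03-01": "Customer visit, Seattle"}

def file_expenses(receipt_dir: str) -> list[dict]:
    """Scan a folder of text receipts and match each to a calendar event."""
    rows = []
    for receipt in sorted(Path(receipt_dir).glob("*.txt")):
        # Assume each receipt embeds an ISO date like 2025-03-01.
        m = re.search(r"\d{4}-\d{2}-\d{2}", receipt.read_text())
        date = m.group(0) if m else "unknown"
        rows.append({
            "file": receipt.name,
            "date": date,
            "trip": CALENDAR.get(date, "no matching event"),
        })
    return rows
```

Which works right up until a receipt doesn't match the regex, the calendar source changes shape, or someone asks where that data went.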
And I'm going, wait, first of all, the...
I mean, there's so many ways to look at this.
One, let's say your company has a thousand employees.
You're saying that every employee is going to go vibe code expense reporting app.
Also, how much data did you just give to, like, an agent?
Like that just...
The list goes on and on.
And every one of them has its own set of bugs and security vulnerabilities and compliance problems.
The second thing is, so you...
And you've wasted tons of tokens.
And when you could just go and fix the HR app to do the right thing, what you want, and do it in one place.
And now everybody gets the benefit.
That's what I was going to say.
Like it's the scale in which like if you go and you fix it for everyone,
it's so much better than trying to fix it in these individual places,
which is also the value of open source.
To me, like the nice thing is the personalization of the software I can write right now.
Where it's like, I want this tool to do this way because I have some other thing that I like.
But when you solve it for everyone, it makes it more complex
because you have more knobs that everyone has to deal with.
When I look at what happened, my first IDE was Excel.
I was writing Visual Basic in Excel, and then Excel got way more complex.
I'm like, wow, this is actually full blown programming.
Like I can do everything in this spreadsheet, which is crazy.
But like maybe Excel didn't need to be that complicated for what some people wanted to do.
Actually, all I wanted was Vim.
Then I could just do, you know, some...
Does that depend on what you want it to do, though?
Absolutely.
But like the AI making it more personal to be able to say like,
oh, actually, I just want this one thing to do this, do it this one way for me,
is easier to decentralize what people want to do.
Like I use Mac as my personal computer because I want it to link up to everything else
and do certain things for me without thinking about it.
But if I'm going to develop something, I'm going to use Linux
because I want to know exactly how it's going to react to that environment.
And you have your own development workflows.
That's what I'm saying.
You have your own plugins and your own things.
Look at Linux and how it's a religion.
Like some people want very particular things in it
and they want to have access to those things and they want all the buttons.
And then there's some people who are like, dude,
I just want to get in there, write something and get out.
Like I think if it brings you that value to do it yourself,
then you'll spend more time on those things.
But if you just need something to get it done...
And I guess that's where I've been drawing the line.
I won't vibe code anything that I'm on call for
and I won't vibe code anything that's going to compromise any data that's private.
And so those two things are like, okay, if it has private data
or if I have to be woken up for it, I'm not going to let AI touch it
or at least not finalize it.
But that's just where my personal lines are.
And I don't know how that scales to what everyone's doing
because it's like an HR app that's like, oh, is scanning a receipt a bad thing?
I don't know.
It's kind of funny, though, because you know how people talk about
the relationship between platform and DevOps versus software engineering.
Or when engineers get to just give their code to QA, you know what I mean?
Like that responsibility and ownership of something
because you said you wouldn't want to do it to something that you're on call for, right?
And I almost think that it shows that you have to feel the pain of your decisions.
Because if you can just transfer it to someone else to deal with it,
then you're going to just write whatever.
But having that ownership of it, knowing that you will get paged in the middle of the night
brings a different type of ownership.
Or if there's a compliance issue in your HR app
and you're going to be the one that gets fined for it.
But I just think that this apps or SaaS are dead.
There's huge value in centralization.
And if you take a look at SaaS, what it is,
it is centralization of a solution to business problems.
And not building the wheel over and over again,
which people love to do that sometimes.
And going back to the complexity thing,
like these SaaS apps, if you take a look at contact management or sales,
there's huge amounts of complexity there.
Like it looks on the surface like, oh, it's easy, right?
Just keep track of your customers and the database.
Also, if you want to open a bakery, you want to bake a cake.
You don't want to sit here and vibe code a SaaS app for your customer stuff.
Exactly.
You know what I mean?
That's the point of we're engineers.
We like doing that.
So yes, we want to build it for everything.
But my friend who has her own bakery, probably that's not how she finds it.
It is a very software engineer focus.
We build our own tools.
We like to build tools.
And for that, I mean, like I said, it's a huge accelerant.
I'm having more fun than I ever had coding.
We did my website and I was like, oh my god,
I actually might like front-end development now.
The amount of domains that I was name-squatting on domains
that actually have something better.
I'm actually using them now?
Exactly, yeah.
It's like, I've been paying $10 a year for this thing for how long now?
It can stop redirecting to my Bluesky now.
Do you have anything specific that you're talking about here at scale for the keynote?
I am.
Actually, I love talking about this and the future of software engineering
and what we should be doing about it.
But at the keynote tomorrow, I'm going to be talking about open source security.
Because if you take a look at what's been happening over the last few decades,
the world has grown dependent not just on computers,
but on the open source software in those computers. Microsoft itself,
our dependencies on open source are enormous.
Much of our platform is built on top of open source.
Linux, we're running Linux at the heart of our infrastructure.
Our office is running on top of a platform called Cosmic,
which is running on top of Kubernetes and a lot of Linux.
And so the risks of insecure open source are rising.
And we've seen more and more attackers come after open source
and the open source supply chain.
That's their way into the corporate world now:
through vulnerabilities and exploits in open source.
Not just that, but AI, as we've been talking about,
is getting really, really good at finding vulnerabilities.
And so we have to take open source security very seriously.
And one of the things that I helped co-found
is the Open Source Security Foundation, which I'm currently chair of.
So it's going to be – these are the stakes now.
We all love open source. We're all building on open source.
But there's a lot of risk that we've got to address
in security and open source as an industry.
We can't just do it individually.
Microsoft can't solve this problem.
Even Microsoft, Google, and Amazon can't solve this problem.
We need to get the whole ecosystem to solve this problem.
And so that's what is the theme of it.
I mean, we can't solve that problem with just signing and checksums.
Like, we have to... it is a people problem still, right?
Do you think that this incentivizes big companies like Microsoft and Amazon
and Google to be better stewards of open source
and to really invest in open source?
Oh, absolutely.
I mean, one of the things that the three of us have done
is fund something called Alpha-Omega,
an organization inside the Open Source Security Foundation,
which is part of the Linux Foundation.
And we've contributed millions of dollars.
The goal of Alpha-Omega is to go and directly apply funds
to help improve security in what we consider critical open source projects.
And it's proven to be very successful.
How does that practically happen?
It happens with very direct engagements with the projects,
like Airstream was one of them.
The Alpha Omega went in, met with them,
figured out where the money could be applied,
what resources they need, contracting,
to go and improve the open source security of it and its dependencies.
And so this is kind of a great model,
but it's only touching the tip of the iceberg
when it comes to this.
We really need to...
And there's something else that's happening, too,
and that's regulation.
I don't know if you've seen, like, the CRA in the EU,
which is focused on we need transparency
into what open source you're using and its security posture.
So...
And if you're not up to a certain level, you can't use it,
which is, like, going to hurt all of us
if we don't get the security up to the standards we need,
which we should just be doing by ourselves,
but regulation's going to force the issue.
And you think that specifically by having regulation that says
it has to meet a minimum bar of security,
you can't go use my open source project that doesn't do that.
And even if it looks fine and it tests fine and AI is like,
yeah, we publish all the code, it's good,
but I still need to have some basic assurances
of maybe even, like, my GitHub account has two-factor authentication.
Absolutely.
And those are, like, the bare minimums of this needs to be there
for even to test the binaries that come out of it.
It's interesting to see all the new tools like Scorecard
and, like, the different ways that people are trying to find ways
to kind of give that... build that trust
and what that... this repo is secure
and that people are thinking of those things.
It's really... like, just... you are what you measure, right?
So just having some measurement around it of, like, here's the number.
I don't care what it's based on, but...
We're trying to get the accurate measurement in.
Yeah, but just, like, number one to 100.
Even think about what we are measuring, you know what I mean?
Yeah, and if you just say, like, this repo is an 80 out of 100 on security.
I was like, I don't care what the number is, but you're just like,
oh, okay, does that meet my bar?
Like, I don't know, like, where is my threshold for...
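To make that "one number" idea concrete, here's a toy sketch of how a tool in the spirit of OpenSSF Scorecard might roll individual checks into a single score; the check names and weights below are invented for illustration, not Scorecard's actual ones:

```python
# Toy sketch of rolling per-check results into one repo score, loosely in
# the spirit of OpenSSF Scorecard. Check names and weights are invented.
def repo_score(checks: dict) -> float:
    """checks maps check name -> (score from 0 to 10, weight)."""
    total_weight = sum(w for _, w in checks.values())
    weighted = sum(score * w for score, w in checks.values())
    return round(weighted / total_weight, 1)

score = repo_score({
    "maintainer-2fa":  (10, 3),  # accounts protected with two-factor auth
    "signed-releases": (0, 2),   # no release signing found
    "pinned-deps":     (7, 1),   # most dependencies pinned
})
print(score)  # 6.2
```

The real Scorecard does something analogous with checks like Branch-Protection and Signed-Releases, but the point stands either way: the aggregate is only as meaningful as the checks behind it.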
Also just the fact that we're talking about it, you know?
Because sometimes people are just like, it's cool,
I'm going to fork it and use it because, you know?
As an open source maintainer, like,
I also feel the weight of the rest of the world vibe coding
to, like, throw things at me, right?
Like, the book Working in Public is about how
the main thing that's hard about open source is the maintainer's time.
I mean, that's the only finite thing we have.
That's evaluation, by the way.
That goes back to the evaluation bottleneck, which won't go away.
And I think that it's just getting harder, like,
when somebody can vibe code 4,000 lines
and they don't have the same ownership over that, you know?
They don't have the context and they don't have that ownership.
They don't have that relationship with the painstaking time
that it took to do that.
And then people, like, you know, people say,
well, open source is how you get a job,
so people are just vibe coding stuff and sending it up.
I need my GitHub graph to show all the greens and then we're good.
Which is really hard because it's, like,
there are people that generally want to do good work in open source
and they can't get a chance.
And we've already had struggles of how we get new maintainers
and new contributors into open source,
and this is just going to create that next hurdle.
There's so much noise.
Well, this is tying directly back to the juniors
not really knowing what they're doing.
That's why it's so exciting because you actually think about these things.
Like, you are the first CTO I've ever seen talk about AI in an honest way,
which makes people want to use it
because you're not presenting it as this black box of magic.
You're like, you know, it's not perfect.
This is how you can use it.
So, like, I can buy into that, right?
I can't buy into the magic black box that, like, does all the things
and never does anything wrong.
What kills me, I'm sure you both have heard this,
is the flex of I don't even look at the code.
Yeah.
You know, like, that's not the flex.
Yeah.
That's not exactly what this podcast is for.
Just go around and find out.
Like, have fun.
And what you said about the GitHub squares is true.
Like, how many dudes do you see on the internet?
And they're like, look at my GitHub.
It has a million, like, green squares.
And I'm like, but did you actually do anything that was helpful
or that was, like, you know what I mean?
Like, I can write a bunch of lines of anything.
Well, this is the metric that we've traditionally used
to show developer productivity is commits.
Yeah.
And actually, you know, it's like, oh, this developer has a lot of commits.
This one has, which, by the way, I mean, you can look at it and go,
that's not really, you know, one commit is like I fixed a comment.
The other one is like I checked in a whole piece of functionality.
Yes.
But now AI is, like, commit, commit.
After every little step it takes, it's committing and pushing.
And so if you take a look at AI commits, like, it's dwarfing everybody else
and like, see, look at how productive we are now with the AI.
And it's kind of the code and all that.
Man, this is probably like 2012.
I found out that, like, the GitHub graph
is just looking at the commit dates in any repo,
and you can fake all the dates.
And so there's generators that'll, like, write your own.
So I literally, when I was looking for a job,
put "for hire" in my GitHub graph,
set that in ASCII art basically across it,
so I just had to rewrite that every day.
I was like, oh, yeah, "for hire."
So, like, maybe someone will see it and give me a job.
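The trick being described works because git records whatever commit timestamps you hand it, and contribution graphs are built from those dates. A minimal sketch (throwaway repo, made-up date, requires git on the machine):

```python
# Sketch: contribution graphs are built from commit dates, and git happily
# records any date you hand it. Throwaway repo, made-up date.
import os
import subprocess
import tempfile

repo = tempfile.mkdtemp()
# These env vars control the timestamps git records for a commit.
backdated = dict(os.environ,
                 GIT_AUTHOR_DATE="2012-06-01T12:00:00",
                 GIT_COMMITTER_DATE="2012-06-01T12:00:00")

def git(*args, env=None):
    """Run a git command in the throwaway repo and return its stdout."""
    result = subprocess.run(["git", "-C", repo, *args], check=True,
                            capture_output=True, text=True, env=env)
    return result.stdout.strip()

git("init", "-q")
git("config", "user.email", "demo@example.com")
git("config", "user.name", "Demo")
with open(os.path.join(repo, "note.txt"), "w") as f:
    f.write("hi")
git("add", "note.txt")
git("commit", "-q", "-m", "backdated commit", env=backdated)
print(git("log", "-1", "--format=%ad", "--date=format:%Y-%m-%d"))  # 2012-06-01
```

Which is why a wall of green squares says nothing about whether the work behind it was real.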
And, yeah, it's just a game that, you know, any metric can be gamed.
And so how do we make sure that, like,
it can be maintained in some way that is reasonable?
Like, is this metric good or not? Is it effective?
I think that's, like, with everything, like DevRel and everything,
people are saying that DevRel is dead,
but it's, like, all the things that are hard to measure, you know?
Like, I think it's funny, like, people will do anything for gamification.
Like, I got the Fable app and it, like, sees how many books you read
and it tells you, like, you're going to, like, not be on your streak.
Like, at Amazon, we gamified the pipelines
and I've never seen so many green pipelines in my life.
Like, people were, like, just sitting there like they were going to get it,
like, hours later, and it's just amazing.
Like, it's kind of, like, what I, like, was talking about today,
like, human psychology and, like, engineering are so coupled
in ways that we don't think they are
and, like, ignoring the human aspect is, like, so disastrous
and we do it all the time.
I mean, with open source, like, if I think of any major hack
in the news at a lot of places, it all starts with, like, social engineering, right?
It's like, go to the weakest link:
oh, that person is just, like, at a bar and they're lonely
and you can probably get the key card or password or whatever.
I feel like I go to, like, all these talks and then there's always some dude who's like,
but what about the technology?
And I'm like, yes, but we've always tried to solve that
and you can't write a good technical solution
without thinking about your users and what state they're using it in.
And maintainers, as maintainers are getting more burned out,
they're like, hey, I would love to help.
Like, the XZ vulnerability from last year.
And that was a long game.
It was a long game, right?
You know what I mean?
They played that well.
It was dedicated.
Like, and just the way that they wrote the binaries,
like, it was not only, like, a wonderful technical Trojan horse,
but they went and found a project that they knew was used.
They found one that had one single maintainer.
They committed actual help for a while.
You know what I mean?
It's very, and I think that we are in just for, like,
a world of interesting cybersecurity in the future.
I think so in the very near future.
Yeah.
It's not going to get easier.
I know that much.
It's so interesting.
I don't know.
A couple of things. Anthropic, again.
They had Claude look at a bunch of open source projects,
found 500 vulnerabilities in just a few hours with a few thousand dollars.
They went and worked with Mozilla and found a bunch of CVEs in Firefox.
It was one of them.
It's written in Rust.
It's one of the most hardened, secure systems in the world.
And Claude went and found, like, 12 new CVEs in it.
So, yeah, the deluge is coming.
And that's one of the things I'll talk about tomorrow.
And coming all the way back to your book, like, Zero Day,
like, not having a manual way to get around the thing,
by relying on this so much and then not knowing how it works
or how to turn it off or how to work around it,
where do we end up in the next even five years?
I don't know.
And I don't know if there's a way to stop it.
I mean, you hear the stories about the law office
and the computer goes down.
Well, they're dead, right?
Because everything is on the computers.
Already we've reached that point of,
and we talked about some examples earlier,
we've reached the point where a computer system goes down
and that's it.
By the way, just for me, I can't code anymore without AI.
I cannot.
You're at the point where it's like,
I don't remember how to do these parts.
It's just a total waste of my time now.
I'm just not going to do it.
I'd rather wait a few hours for the system to come back
than waste my time typing in code
and doing the hard work that I used to do because I had to.
And actually a lot of the, like, you know, give it that,
the Wordle stats extension, I couldn't do that one.
Like, because I don't know React and Node.
And like, if the AI is not there to help me, I'm dead.
And even then, I'm like, you don't know the syntax,
but you know what you want to do.
And like, I built this backpack over here for scale
that runs a Kubernetes cluster
and people keep asking, how long did that take you to build?
And I'm like, you know what?
It probably took me about 10 years
because I've been learning every skill.
I was like, the actual like putting it together
is a few hours.
It really didn't take, like I vibe coded the app in it.
I plugged in some things, but I knew what it was capable of.
And I knew where I wanted to go with it.
And so over the last 10 years, I've learned the skills
to say like, oh yeah, I can finally implement the thing
that I wanted to do because I know the limitations.
How long do you think it's going to take people
to catch up to the thought process
that you're having with the pipeline
and how we're not just going to have like AI build everything
because we're literally watching it happen in real life
where they are just absolutely laying off
like tens of thousands of people saying
that AI is going to do everything.
Like what do you think is the line in which people are like,
okay, maybe we need to start being more realistic, you know?
Like we've seen tons of outages.
We've seen how it's gone.
Like what do you think is going to be the like reckoning,
I guess, or for us to start like actually creating processes
in which to like have humans and juniors and AI work together?
So the worst case is that it becomes a crisis
that wakes everybody up to try to solve it together
because it can't be solved by Microsoft by itself.
And that crisis is: crap, our seniors are retiring
and we have nobody that understands
how to run our system or evolve it.
And AI can't do it by itself.
And the juniors with AI can't do it even if we hired them.
We've lost so much tribal knowledge that's not being passed down.
Exactly. So that's the worst case scenario.
And, you know, me and Scott are starting this,
and we're talking to a bunch of schools
about what it would take to scale a program
like the preceptorship program across a bunch of companies and universities.
How do you do that in actual corporate America today, though?
Like how do we change the engineers
that are actually like working at these big companies?
Like the junior engineers, the mid engineers, senior engineers,
how do we change the way that they're working so they're still learning?
Because, I mean, even when you get into a company, right,
like there's still a lot when you're one year into engineering
versus three years or five years or 10 years.
There's so many different milestones and kind of like different levels.
Like what are we doing in corporate America like right now, you know?
Well, at Microsoft, we've got a pilot of this thing going on in Azure.
It's a small scale pilot because we're trying to just learn.
But I think every company that wants to participate in growing juniors
has to, I think, is going to have to have an apprenticeship type model.
But I'll tell you the other thing, too.
There will be a lot of places where they won't do that
because, A, they don't have the skills.
They don't have the scale.
They don't have the incentives to do it.
So I think this is why structurally society needs to support it.
And if you talk about, I think you were talking about tax credits
or government incentive money to support an apprenticeship model,
that's going to have to happen, I think, to really support this.
And it's going to be, you know, at the government level,
that's basically society saying we're going to have a big problem
unless we get the incentives right.
At the end of the day, people have to tell the government this is important.
I mean, think of like other industries that had these problems during COVID,
like the nursing shortage and teachers.
Everyone all of a sudden said, like, pay teachers a million dollars.
My kids are home with me all day now.
Well, teachers should make more money.
As a society, we hit this point of like this is an emergency
and we need to incentivize it better.
That's why I think it's interesting that you and Scott who are very, you know,
like you have a lot of influence, you know,
like are saying and coming out and saying it
because there's so many people who aren't saying it.
They are just completely pretending like it doesn't exist.
So I think that's such a competitive edge that it brings to Microsoft
because if you are the only big company,
I don't know if it is, but I'm saying like theoretically,
that is actually thinking about that pipeline issue.
That puts you miles ahead by the time everybody else figures it out in a crisis.
It is some gracious thinking long term, right?
Like this is just like, hey, Microsoft isn't here for a five year.
We're just going to burn a bunch of cash and be gone, right?
Like Microsoft is like, actually, what does it look like in 20 years?
Yeah, I think that's important though.
Like we're talking about the strategy of like technology
and people are like AI is going to fix everything.
And I think it's going to be interesting when the winner is the person
that thought about the human and like we're like as engineers,
like we come up with names for everything, Agile, Scrum, like DevOps,
but nobody's thinking about how we take this wonderful new tool,
but make it like a process and how we use that process
and how and I'm just like, we use processes for everything.
This is what we're not going to do it for.
What is the marathon of AI look like?
Like it's not just the next six months.
Do you have any closing thoughts on the security
or the AI aspects of where we might be even at the end of 2026?
Like this is all moving so fast.
Well, I think a few things.
On the AI threats with security and on the AI impact on hiring,
the world is starting to recognize it, both of those.
I think, from these reports from Anthropic,
what we're going to see is threat actors finding vulnerabilities in software
at a rate that's going to continue to accelerate.
And we're already starting to wake up to the fact that that's been happening.
As far as the impact on juniors in the talent pipeline,
you've already started seeing this a little over the last few years,
and that seems to be accelerating as well,
with companies announcing that they're not hiring juniors,
or job trending data showing that juniors,
especially in software engineering, aren't being hired at scale,
but senior hiring remains strong and has even grown in a lot of places.
So the effects are becoming visible,
and I think by the end of the year, there are going to be a lot more people.
Scott and I started talking about this the early fall of last year.
We were one of the only voices,
but now you're hearing more and more people starting to say things are going to happen.
So I think by the end of 2026, it'll be recognized as a problem by the mainstream.
Sure. Yeah. Makes sense.
How did you find time to write novels and technical books?
And what interested you into writing novels?
It's about technical things, but it's a legitimate novel, and more than one.
Yeah. Weekends, evenings, breaks, the kind of stuff before work,
kind of what you would expect.
Whether it's writing novels or now I'm AI-assisted coding,
I won't call it vibe coding because I don't want to cheapen what I'm doing.
But that's what I've always done my whole career,
is find time to do these other things while I'm doing my mainstream, my mainline responsibilities.
I know we don't have a lot of time, but could you kind of tell us a little bit about your career?
Because I think it's really cool that you just seem to genuinely want to figure things out,
and that you still stay technical in coding, even at this level.
Yeah. So I got into computers in eighth grade when a friend of mine,
their dad brought home an Apple II from this university they worked at.
And he said, hey, do you want to go play on it?
And that night, that was it for me.
I'm like, this is my future.
Got the plug.
Yeah. And then I wanted to learn as much as I could about the internals of computers.
So I went to school and got a degree in computer engineering,
got a Ph.D. in computer engineering because I just wanted to go as deep as I could with learning,
went out, worked for a few companies, got into Windows in my Ph.D. program and postdoc,
and started to find problems with Windows and pissed off Microsoft,
and then started a software company selling freeware software, this Sysinternals stuff,
teaching Windows internals, writing the Windows internals books as part of that.
And then I actually started writing Zero Day back in the early 2000s,
and then got acquired by Microsoft in 2006 to go work in Windows.
So basically Windows 7 was the release that I was part of,
kind of setting Windows up for the future.
As like an engineer?
Yeah, as an architect in the core operating system team.
In 2010, saw the writing on the wall about Cloud
and realized a huge opportunity there to have a big impact and joined Azure,
which at the time was just a couple hundred, a few hundred people at Microsoft
and nobody, you know, basically the industry didn't know what Cloud was.
The Windows team was like, whatever.
And then that's where I've been since in Azure since 2010.
Well, thank you so much for coming on the show. It was great talking to you.
And we will put the link to your live stream from Scale for the keynotes in the show notes.
So when this episode comes out, people can check it out there.
We should put a link for the paper too.
Yes, we will put a link for the paper and also put a link for Zero Day in the books.
So, yeah. So thank you so much. And we will talk to everyone soon.
All right. Thanks, Justin. Thanks, Autumn.