I recently wrote about the now infamous and largely eye-rolled MIT study. You'll remember its sensational headline: that 95% of organizations are getting a zero percent return on their genAI investments. Yikes. I won't rehash that article here, but I will offer an alternative that didn't get quite so much press coverage: a three-year longitudinal study, published annually by the Wharton School, that reached a very different conclusion.
Author's note - I have created a publicly available Google Drive for all the interesting things I find that seem useful for sharing. You should be able to access it, let me know if you cannot.
Wharton's slightly more scientific study found that of 800 leaders surveyed, 75% report positive ROI from their genAI investments. Hmm.
One of the headlines of the Wharton study was that "most firms now measure ROI, and roughly three of four already see positive returns". Darn it, the ROI problem continues to be devilishly confounding. This genAI business continues to be such an emotional roller-coaster. So let's dig in, and then I'll give you my $0.02 on what it means to us in mortgage.
Methodology
This one was slightly more scientific. It was based on 15-minute online "quantitative" surveys of 800 leaders, and the study has been repeated each year since 2023, which I have to say was very forward-looking of Wharton. I put quantitative in quotation marks because, while I am sure the data they gathered was numerical, the source of the information was people describing their experience and, therefore, qualitative.
Definitely bigger than the MIT study, and assessed over a longer time horizon. Still qualitative and rich with opportunities for bias.
Main Observation #1: GenAI Usage is Now Mainstream
There was a lot to this first observation that was very validating. I haven't seen a good study on adoption since the Deming study from late 2024 (there is still no better one, unfortunately). Yes, genAI is mainstream. Yes, 46% of leaders are using it every day and 80% are using it at least weekly. Still, of note, one in six executives surveyed were not using it (I commend them for their honesty). Those executives and companies are in for a rude awakening if they don't join us on the AI train.
Very interesting to note that "practical, repeatable use cases supporting employee productivity" see the most adoption, with IT, legal, and HR being the furthest ahead. I continue to believe that adoption is good, employee productivity is a great place to start, and also the truly differentiating companies will be making their inroads much deeper into operations. Unsurprisingly, the study finds that operations as a business area is among the furthest behind. Well yeah, it's the hardest.
Main Observation #2: 75% of Respondents Reported Positive ROI
One thing holding us all back, that the study found as a positive, is the idea that "accountability is now the lens". Yes - we need to find the value for sure, and also we are still early. Part of the value is in the learning, and the struggle. We can easily get stuck in a spiral of ROI purgatory when we load up a small set of narrow use cases with the full spend of getting started.
Budgets are moving from "one-off pilots to performance justified investments", and money is being shifted from existing cost centers to genAI adoption. Again, yes, we need to justify investment, but I'm telling you ROI should be just one lens, not the whole picture.
One of the headline conclusions here is "budget discipline + ROI rigor are becoming the operating model for genAI investment". This to me is a sign that bean counters (absolutely not judging the bean counters, love the bean counters) are winning the executive perspective.
Main Observation #3: Culture is the Adoption Throttle, Not Technology
This was the most interesting section of the study for me, by far. "People set the pace"... I love this, and I could not agree more strongly that this is true. The gap between how we are using genAI (the "everyday AI" that is mainstream) and what it can actually do is a massive chasm.
We are seeing that almost 70% of leaders reported they have some kind of Chief AI Officer role, indicating that accountability for AI adoption has moved into the C-suite. At the same time that confidence in AI and its ability to provide value grows, capability is falling short.
"Capability building is falling short of ambition. Despite nearly half of organizations reporting technical skill gaps, investment in training has softened, and confidence in training as the primary path to fluency is down. Some firms are pivoting to hiring new talent, yet recruiting advanced genAI skills remains a top challenge."
Unless you have millions of dollars, recruiting advanced genAI skills is absolutely impossible, especially in mortgage. The only way to get it is to grow it at home, or partner for it. And when partnering is chosen, it needs to come with a coaching and "teaching by doing" component. I continue to believe that every organization needs in-house genAI talent, and that means investing in general education, applied education, education that favors application over attendance. Oh, and learning and applying new ways of thinking.
"Cartoon style scene of several friendly bear cubs swirling playfully around glowing streams of futuristic technology - floating holograms, data ribbons, and soft light particles..." Created on Midjourney
This is where we see that "people set the pace". In the past few weeks at PhoenixTeam, we have been up to our eyeballs in Phoenix Burst product adoption and workforce enablement, both separately and together. At the heart of adoption is the people. Literally. There is no adoption without the people. I can be guilty of this myself, we focus so much and so hard on the tech and the product and the experience, and we lose sight of the hearts of the people. The fear. The lack of trust. The domain of "change management".
The study echoes: "the human side remains the bottleneck and a key potential accelerant. Morale, change management, and cross functional coordination remain persistent barriers. Without deliberate role design, coaching, and time to practice, 43% of leaders warn of skill atrophy, even as 89% believe genAI tools augment work."
What does this mean for us in mortgage?
I had a great friend say to me last week about adoption struggles, "perhaps it's FOAK?" (first of a kind), which I had to look up. (Embarrassed to admit it, but there it is; authenticity is the best route to achieve authentic human connection.)
Thank you, ChatGPT, for once again helping me learn and put words to things I knew but only felt.
And yes, tying back to the study, for those of us that are working on really challenging areas, where the metrics are not well established, where the tasks are not necessarily repeatable, where the path ahead has to be redesigned -- we are running into FOAK. Taking a step back to think about the people, and learning from what they are telling us is really important. And forging ahead, being adaptable, being resilient, and staying positive.
In no particular order, what I think all this means to us in mortgage:
When we invest in applied learning, the people adapt better. I suggest that mortgage leaders look at their talent strategies, really think about what it means to the people, and create tailored learning strategies based on real change and thoughtful consideration about what is actually changing about actual jobs.
Remember, whatever feedback you are getting, it's valid. Even if you don't agree with it. That is an opportunity to open our minds to what someone else is thinking, doing, and feeling. Their experience is their experience. I suggest that mortgage leaders inspect their own feelings, inspect how they are using genAI (if at all), and then talk to others at all levels about their experience. It probably won't be the same.
If you want fast adoption and fast ROI, go with the easy stuff. Just know that the easy stuff is commoditizing faster than you know, and differentiating on fast and easy isn't a thing.
There is so much happening out there, it's very hard to keep up. I hope you will reach out and share what you are learning and applying, what's working and what's not. We are listening and trying to find solutions.
By Tela Gallagher Mathias, Chief Nerd and Mad Scientist at PhoenixTeam
What does the infamous "MIT study" really mean to us in mortgage?
Everyone is hating on the MIT study published in July, which claimed that 95% of organizations are getting a zero percent return on their genAI investments. This report, published by the MIT Media Lab, has been extensively debated by both critics and advocates, including some of the most recognized and respected voices on the AI circuit.
The 2.27% current impact represents the portion of total possible value that organizations are realizing today from agentic AI. (iceberg.mit.edu)
"Despite $30-$40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return.... Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L Impact."
That's a pretty spectacular claim. I certainly agree that finding the return on investment (ROI) is harder than expected, and I have seen teams swirl looking for that spectacular 2-4x ROI on one or just a handful of use cases. I think this study also ignites fear in all of us product companies looking to really make a difference in mortgage. We don't really want to talk about how hard it is to find meaningful and lasting change. So let's just put that out in the open.
What does the article actually say?
The main point is to argue that the key differentiator between success and failure is systems that learn. It argues that the classic ChatGPT model of assistive or conversational AI is great for short thinking tasks, and falls apart for long thinking due to lack of memory. It argues that agents are necessary to achieve real organizational value, and that there is a window of about 18 months to settle on partnerships that will help organizations really capitalize on the AI advantage.
I don't think these conclusions are wrong. In fact, I agree. However, I think they are, at best, weakly supported by a sparse set of anecdotal data in a study that has an agenda.
So basically it's a study to put data behind the claim that agents are the key to real value unlock, and that the time is now to seize the advantage. That's the bottom line, and I think it's useful. Yes, there are many reasons to hate on the study, but the bottom line strikes me as mostly valid.
Better than a bunch of hallway conversations?
The study was based on 52 interviews across "enterprise stakeholders", a "systematic analysis" of about 300 public AI initiatives, and surveys with 152 leaders. Not a super big or scientific study from my perspective. But still, let's put away the pitchforks. It's better than nothing, right? I think some of the best insights are revealed in the quotes.
"The hype on linked in says everything has changed, but in our operations, nothing fundamental has shifted." Little bit of victim mentality here but ok, yes there is a lot of hype and the PowerPoints do not agree with what is actually happening.
"If I buy a tool to help my team work faster, how do I quantify the impact? How do I justify it to my CEO when it won't directly move revenue or decrease measurable costs?". Preach - this is like THE problem. We only count two types of beans in mortgage - headcount and revenue. One has to go down and the other has to go up. otherwise we have no ROI.
"[ChatGPT is] excellent for brainstorming and first drafts, but it doesn't retain knowledge of client preferences or learn from previous edits. It repeats the same mistakes and requires extensive context input for each session. For high stakes work, I need a system that accumulates knowledge and improves over time." Yes and no on this one. The more I use ChatGPT, the better it performs relative to what I want it to do. It does anticipate what I will ask, and I have to provide less context. But yes, on an individual question basis, memory is an issue.
"I can't risk client data mixing with someone else's model, even if the vendor says it's fine". This is completely true and I hear it all the time.
A high bar defining success.
The report had a pretty high bar for the definition of success. Said in my words, success is defined as meaningful impact on the P&L, measured six months post deployment. Keep in mind, this wasn't actually measured, this was based on what those interviewed or surveyed said.
I've been a large scale commercial software product manager for a lot of my career. I've had many glorious successes and just as many spectacular failures. By this definition, I'm sure at least some of my successes would be failures. And if you consider what it takes to move in federal, I think success would be even more scarce. This definition applies to a narrow spectrum of small, turnkey, commercial solutions where you can turn it on and see immediate P&L impact.
While this is definitely the goal for all of us, I'm just not sure it's a realistic definition for the rest of the world. Or maybe I'm the one with the outdated perspective (ok, ok, probably it's a me problem and I am being defensive). I do base a lot of my experience on what the process has been like in the past. I certainly agree that in a world where we can go concept to cash in a week, we should be able to move the needle on the P&L in a matter of months.
Learning systems and the agentic web.
The authors are from the Networked AI Agents in Decentralized Architecture (NANDA) team at MIT. NANDA is a research initiative focused on how agentic, networked AI systems will impact organizational performance. They conduct research and host events that explore the future of what they call the agentic web, defined as "billions of specialized AI agents collaborating across a decentralized architecture".
Agentic AI, according to NANDA researchers, is the class of systems that embeds persistent memory and iterative learning by design, directly addressing what they see as the learning gap in assistive AI solutions like ChatGPT and wrapper-based AI solutions.
That is also a high bar, in terms of the definition of an AI agent. In my classes and workshops, I typically define an AI agent as having four key characteristics (a quick sketch follows the list), the ability to:
Perceive, understand, and remember context.
Reason about a problem.
Plan and take action.
Use tools.
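To make that definition concrete, here is a minimal sketch of the four-part loop in Python. Everything in it is mine, not the study's or NANDA's: the stub plan() logic and the mortgage-flavored tools are hypothetical placeholders, and a real agent would delegate the reasoning and planning steps to an LLM rather than simple string matching.

```python
# Minimal agent-loop sketch: perceive/remember, reason, plan/act, use tools.
# All names here are illustrative placeholders, not a real framework.
from dataclasses import dataclass, field

@dataclass
class Agent:
    memory: list = field(default_factory=list)   # characteristic 1: remember context
    tools: dict = field(default_factory=dict)    # characteristic 4: tools it can call

    def perceive(self, observation: str) -> None:
        # Characteristic 1: perceive and store context for later turns.
        self.memory.append(observation)

    def plan(self, goal: str) -> list[str]:
        # Characteristics 2 and 3: reason about the goal and produce a plan.
        # Stubbed as string matching; a real agent would ask an LLM here.
        return [name for name in self.tools if name in goal]

    def act(self, goal: str) -> list[str]:
        # Characteristic 3: take action by executing the plan step by step.
        results = []
        for tool_name in self.plan(goal):
            result = self.tools[tool_name]()     # characteristic 4: tool call
            self.memory.append(result)           # remember what happened
            results.append(result)
        return results

# Hypothetical tools, invented for illustration only.
agent = Agent(tools={
    "lookup_rate": lambda: "current 30-year rate: 6.1%",
    "check_guideline": lambda: "guideline section 4.2 applies",
})
agent.perceive("user asked about pricing")
print(agent.act("lookup_rate then check_guideline"))
```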
I adapted this definition from Jensen Huang at NVIDIA GTC earlier this year, so admittedly maybe it's time for me to evolve my definition; it has been about four months or so. I like my definition because it's easy to communicate and remember, and it is easy to contrast with assistive or conversational AI. But just because it's easy doesn't make it right. NANDA has a much more complicated perspective, resting on a foundation of what they call decentralized AI.
Decentralized AI enables collaboration amongst individuals and organizations that have complementary assets, without a central oversight function. The idea is sharing to achieve value rather than relying on central functions (or monopolistic vendors).
This idea of an agentic web resting upon a network of decentralized AI systems is complicated, and requires a level of technical sophistication that I really don't have. But I get the concept, and it makes theoretical sense. It just seems... really hard. It requires a lot of humans (?) to do a lot of sophisticated things around the world. Meanwhile in mortgage we are still just trying to figure out agents beyond the call center, research functions, and development acceleration (where agents are well established).
The five myths about genAI in the enterprise.
This I did find useful. It was a little section that did a good job painting the picture of common myths in genAI, some of which I agreed with, and I did stop and think about all of them.
Myth #1: AI will replace most jobs in the next few years. Yeah, no. Certainly, across all major technological disruptions in the history of disruption, jobs became obsolete and new jobs were created. We are seeing fewer jobs for entry-level team members. The Stanford Digital Economy Lab, using ADP employment data, found that entry-level hiring in “AI exposed jobs” has dropped 13% since large language models started proliferating.
Myth #2: Generative AI is transforming business. The study suggests that adoption is high but transformation is rare. I can echo this sentiment, this is what I see as well. I see very little truly transformational adoption in our industry.
Myth #3: Enterprises are slow in adopting new tech. The study indicates this is a myth, but then goes on to say "enterprises are extremely eager to adopt AI and 90% have seriously explored buying an AI solution". Exploring and adopting are just not the same thing, so I disagreed here.
Myth #4: The biggest thing holding back AI is model quality, legal, data, and risk. They argue, as I've pointed out already, that it's the lack of system learning that is the biggest barrier. This may be true, but model quality is a real problem, and it's what I hear about the most. The question I get most often is "how do you know it's right" (followed closely by "is it safe"), so I don't 100% agree with the authors' sentiment on this one.
Myth #5: The best enterprises are building their own tools. They state that "internal builds fail twice as often". This one is hard for me to substantiate as I tend to work with organizations that buy and build, perhaps with a slight lean towards the buy side. Naturally, this means my perspective will be skewed. I did find this fascinating, though, and in theory it makes sense. I'll have to dig into this one more and see.
Bottom line for us mere mortals in mortgage.
So cutting through all the jargon and the NANDA rabbit holes I explored through my study of the study, here's what I take away from all this for us in mortgage AI.
We need to redefine, or at least expand, our definition of an agent and think more thoughtfully about agentic AI. I continue to believe we have to start where we are, and agentic AI adoption is still very, very early for us in mortgage. I continue to believe that good CI/CD pipelines for retrieval augmented generation (RAG) are the right foundation for organizations that want to build. I don't generally advise organizations to skip steps, but maybe I should.
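Since I keep pointing to RAG pipelines as the foundation, here is a toy sketch of the core retrieval step, under heavy assumptions: the bag-of-words scoring is a deliberately simple stand-in for a real embedding model, and the guideline snippets are invented. The CI/CD part is everything around this core: versioning, testing, and redeploying the chunking, indexing, and evaluation steps as the corpus and models change.

```python
# Toy RAG retrieval sketch: score documents against a query, then stuff
# the top matches into the model prompt as context. The term-frequency
# "embedding" is a stand-in for a real embedding model.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Stand-in 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented "guideline" snippets; a real index would hold thousands of
# chunked pages from investor and agency servicing guides.
corpus = [
    "Borrowers must be evaluated for loss mitigation before foreclosure referral.",
    "Escrow analysis must be performed annually for each mortgage loan.",
    "Force-placed insurance requires advance notice to the borrower.",
]
index = [(doc, embed(doc)) for doc in corpus]

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# The retrieved chunks become grounding context for the generation step.
question = "When do I need to run an escrow analysis?"
context = retrieve(question)
prompt = f"Answer using only this context:\n{chr(10).join(context)}\n\nQ: {question}"
print(prompt)
```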
We need to be even more aggressive about agentic AI adoption and seeking out high-value, low complexity agentic use cases. I am already doing this, but not with enough vigor so I will place more emphasis on this.
We are still really early. The authors argue there is an 18-month window of opportunity. This is likely true in other industries but based on what I am seeing, our window is a bit longer, say 24 to 48 months. Longer in federal, of course. But it's coming.
And finally, perhaps most importantly, we have to continue to double-down on adaptive systems, and continue to incorporate what we learn and see from the actual operations into everything we build. This applies to our product strategy, our educational strategies, and workforce transformation considerations as well. This will take more thought and introspection, I'll let you know what I land on.
By Tela Mathias, Chief Nerd and Mad Scientist, PhoenixTeam
Blue Phoenix Awarded $215 Million VA Loan Guaranty DevSecOps Contract | A PhoenixTeam and Blue Bay Mentor-Protégé Joint Venture
ARLINGTON, VA, UNITED STATES, October 3, 2025 -- Blue Phoenix Solutions, LLC (Blue Phoenix), a Service-Disabled Veteran-Owned Small Business (SDVOSB), announced it has been awarded a $215 million contract by the U.S. Department of Veterans Affairs (VA) Loan Guaranty Service (LGY) to modernize and operate critical home loan technology systems that serve millions of Veterans and their families.
Formed under an SBA-approved mentor-protégé joint venture between PhoenixTeam (mentor) and Blue Bay Delivery Solutions (protégé), Blue Phoenix combines deep mortgage technology expertise with a mission-driven commitment to “serve those who serve.” This award underscores the strength of our partnership with VA and our shared commitment to innovation and modernization.
Under the new award, Blue Phoenix and its partners will: deliver secure, modern, API-first solutions built on AWS and Salesforce; enhance operational efficiency with automation and continuous DevSecOps practices; improve the Veteran home loan experience by streamlining processes for lenders, servicers, and program participants; and support adoption of AI-ready data infrastructure that positions VA for the future of digital transformation.
“This contract represents both a milestone and a mission,” said John Trodden, Managing Partner of Blue Phoenix and CEO of Blue Bay. “Today we take that commitment to the next level. As a Service-Disabled Veteran-Owned business, our purpose is personal: delivering technology that ensures Veterans receive the benefits they’ve earned, with dignity, speed, and trust.”
Tanya Brennan, CEO of PhoenixTeam, added: “This award reflects the strength of our mentor-protégé partnership and the trust VA has placed in Blue Phoenix. From day one, our focus has been on serving Veterans and driving modernization that lasts. Together with our partners, we’re proud to help VA accelerate delivery, improve efficiency, and realize the full vision of a modern LGY platform.”
Blue Phoenix leads this effort by bringing together deep expertise in federal mortgage systems, Salesforce, AWS, cybersecurity, and human-centered design. The result is a contract award that will improve outcomes for Veterans, advance modernization across VA, and set a new benchmark for mortgage technology transformation.
About Blue Phoenix
Blue Phoenix Solutions, LLC (Blue Phoenix) is a Service-Disabled Veteran-Owned Small Business (SDVOSB) headquartered in Arlington, VA. Formed under an SBA-approved mentor-protégé joint venture between Phoenix Oversight Group, LLC (PhoenixTeam), mentor, and Blue Bay Delivery Solutions, LLC (Blue Bay), protégé, Blue Phoenix is on a mission to “serve those who serve” by bringing PMO, product and delivery excellence, and strategic advisory leadership to federal agencies. For more information, visit www.bluphx.com.
About PhoenixTeam
PhoenixTeam is a woman-owned technology services firm headquartered in Arlington, Virginia, specializing in AI-powered mortgage operations and technology services for the mortgage and financial services industries and federal housing agencies. Our mission is to enable affordable and accessible homeownership through innovative, customer-centric technology. With a strong focus on generative AI, we tackle complex industry challenges, equipping businesses with cutting-edge tools that enhance innovation, efficiency, and compliance. By bridging the gap between technology and business teams, we strive to bring joy and purpose back to software development, making a meaningful impact in the lives of our clients and homeowners everywhere. For more information, please visit www.phoenixoutcomes.com.
About Blue Bay
Blue Bay Solutions is a Service-Disabled Veteran-Owned Small Business (SDVOSB) founded by retired Marine Corps officer John Trodden. The company bridges gaps between business leaders, benefits partners, and engineering teams to improve veteran benefits delivery. With a leadership approach rooted in military principles and a focus on program management, benefits delivery, and product discovery, Blue Bay helps federal agencies and partners drive measurable outcomes while creating opportunities for veterans transitioning to civilian careers. For more information, please visit www.bluebayvalue.com.
Ten Not Very Easy Steps to Achieve AI Workforce Transformation
Workforce transformation is no joke. It's one of those things that, as consultants, we oversimplify with slides showing the Kotter change management framework. Throw in some change champions and some short-term wins and voila - managed change. Bless our hearts, how naive we are. This is the story of what it actually takes to transform a workforce. I had the great privilege of running a workshop yesterday on this subject, which went really well, so I thought I would share it with the mortgage AI community in hopes of giving you a way to think about this particular type of change from the lens of lived experience.
I am not knocking Kotter, I promise. I just find that the unique nature of AI change requires a blended approach.
By the way, I'm absolutely not knocking Kotter, Kotter is awesome. I use Kotter, and the framework is quite useful. But I find that the Kübler-Ross framework is much more helpful in our particular context; it so aptly comes from the human side of the change process. As I reflect on my own experiences with profound loss, I'm struck by the internal versus external lens of loss and adjustment from each of these perspectives. Kotter comes at change from the outside - a management perspective. Kübler-Ross comes at change from the inside - a personal one. We can do both at the same time.
Step One: The magical moment of realization (AKA "oh shit")
December 2023, PhoenixTeam quarterly partner retreat, Tela at the front of the living room going on like a crazy person about how "there has to be a better way". It had clicked for me on a client project. I was part of a team working to do a ground-up rebuild of a servicing platform. My team and I were transforming servicing guidelines into software requirements. So how do we do that? Well, you put the Fannie Mae guide on one screen and your Excel spreadsheet on the other one, and you copy, paste, interpret. Do this about, oh, 150,000 times across Fannie Mae, Freddie Mac, VA, USDA, and FHA, and then you can start the real work.
I had tinkered just enough with ChatGPT to experience what I call "the magic moment". This is the moment when we realize what genAI can actually do for the first time. The moment we actually understand why the world is losing their mind, and what all the fuss is about with the urgent need to get an AI strategy. This is the first step, and it really can't be skipped.
If you are looking for a place to start, this is it. The first thing to do is find a way to get your leadership team to have the magic moment. It's actually pretty easy: the right mix of foundational education, combined with the right demo, and the moment will happen. It can be done in as little as, say, 45 minutes. This is why I do free AI speed learning. I want to help the industry get to that moment as quickly as possible, because that allows us to move to step two.
Step Two: Existential dread
Once I had the magic moment, then came the existential dread. I quickly realized that unless we did something radically different, we simply weren't going to have a business in three years time - at least not a wildly successful one. There will be a long tail on genAI impact, a very long tail. In fact, that tail will likely be longer than my remaining lifespan. But the really differentiated companies, the companies that will thrive early, will get to the change early. They will make the genAI pivot faster than everyone else.
I tried to get a photorealistic image from Gemini but it just wouldn't make the comet hit the earth. It kept showing the fire and smoke BEFORE the comet hit. In this version, the comet isn't even on the right trajectory. GenAI is so frustrating.
So this is the phase where we start to understand how much everything is going to change, and we get really scared about how we fit in. How our business fits in. How we are going to be able to provide for our families and the families of the people who work for us. Yeah, it's that impactful. My business was assured to be impacted early and hard; your business will have a different impact radius, but I promise you, it's coming.
Don't be discouraged at this step. It is necessary for your team to have the fear, and to use that fear to fuel the next steps. So basically, if you are afraid, at least you know you have completed steps one and two, and can now move to step three.
Step Three: A business vision
From here you have a question to answer - are you going to eat the bear or is the bear going to eat you? We set out to redefine our business. We could sit around and wait for someone to do that for us (and risk irrelevancy) or we could decide what the business would be like. We educated ourselves, we immersed ourselves in Silicon Valley, we tinkered with all the tools. We correctly anticipated that the world of software development was going to radically change.
You and the bear sitting down for dinner wondering who is eating whom.
We decided that in the new world of software development, we just wouldn't need as many roles. If the basic work of every traditional profession could be reliably automated, then what would be left? We call it a paired product team. Instead of a team with multiple specialized roles, we envision one super-powered "product" role called the value engineer, paired with a supercharged chief AI engineer. This team just runs a straight kanban approach; agile is simply too slow. In a world where you can go concept to cash in days, who has time for a two-week sprint?
Now, this is an end-state vision. We still have all kinds of teams, which is a good place to introduce the concept of what we call "interskilling", rather than upskilling or reskilling. This is the concept that we need to engage our workforce in a way that allows them to straddle both the now AND the AI future at the same time. I have multiple types of teams, configured for however we need to operate. I have traditional scrum teams, paired product teams, and even waterfall teams.
We have to be able to operate in whatever model works, and different models work for different environments and client ecosystems. On the one hand, agile is dead. On the other hand, agile is alive and well-ish, and will be for some time.
Step Four: The fumbling around phase
Great! We have a business vision and we've embraced our fear. Now we enter the fumbling around phase. This is where we figure out all the things we don't know. Our teams are all homegrown. We have these incredible team members and leaders, why would we want to go outside for genAI talent? Furthermore, where would we even find this talent? At the time it was super-scarce (still is) and we believed we could do better on our own.
So we puttered and tinkered. We tried tools. We made a lot of CustomGPTs. We figured out what a ragbot was and built a lot of them. We threw away a lot of stuff. We bumped our heads against the wall numerous times. We got frustrated and enjoyed big and small victories. We navigated what Ethan Mollick and the Harvard Business School have called the jagged frontier.
Step Five: A failed attempt at reskilling
Enter the first failed attempt at reskilling. I created some awesome PowerPoint slides and called it a bootcamp. We waved our AI flag, held the bootcamps, and waited for the transformation. Then we wondered why the transformation wasn't happening.
Turns out attendance is not the same as application.
Yes, there will be PowerPoints and flag waving. I have personally created about 500 PowerPoint slides, and that was very helpful (and very expensive - see next step). And I can honestly say the slides are awesome. But it's not about the PowerPoint slides. It's about the application of the learning. It's about finding and embracing the new way of thinking. The slides are just a step along the way. We do demos. We show people how to build things. We explain things. We explain things again in a new way.
Step Six: An expensive investment
We spent a lot of money on a dedicated AI team. We spent a lot of time creating slides. We spent a lot of time going to conferences to learn. We spent a lot of money on education. We spent a lot of money on AI tooling and subscriptions. (I won't tell you how much my monthly personal spend on AI is - it's too ridiculous to admit). I can tell you from personal experience it is way more expensive than you think it will be. We continue to spend a lot of money on our team members, and it's all worth it.
Be ready to spend if you really want to be ready. And not just on tech, especially not just on tech.
Step Seven: Light at the end of the long tunnel
By now, things started to click. We started to see it working. It was important at this moment to reflect. Maybe this is what Kotter means by the small wins. We set out on a journey to get somewhere, and we got somewhere. I just knew there was a better way in December of 2023, and there was. We dedicated and re-dedicated ourselves to the journey. Even through, perhaps especially through, extraordinarily difficult times.
Midjourney always does better with image gen. Prompt: light at the end of a very long tunnel.
2024 was a year of extreme financial pressure for our company. We lost a huge contract (well, we didn't really lose it, but that's a story for another article). We had to do layoffs for the first time. It was horrible. And through it all, we continued to invest in genAI. Just because times were hard didn't mean we could give up. Especially because the times were hard, we had to stay the course. Giving up would just mean certain defeat, and we simply were not going down without a fight. There's a reason our motto is "pivot or die" and we really mean it.
Step Eight: A business revision
We didn't quite get it right the first time. Or the second time. I've had lots of bad ideas over the course of a 25-year career. I can say, however, that I tend to have more good ideas these days. Our original idea was to build a product that would automate the software development process. Good idea. Great idea, even. However, if we'd gone that route we would have been squashed like a bug. There is so much money behind this problem, billions and billions of dollars.
Software development acceleration leaderboard. Good thing we didn't stay here.
As we listened to and learned from the industry, we adapted our ideas, our product vision, and our services strategies in ways that better aligned to our particular market's needs and wants. Thank goodness we did. And thank goodness for Leslie Peeler and all the great partners at Cenlar FSB, as well as David Upbin and the Mortgage Bankers Association, for walking with us on this journey.
This part of the process is about having the courage to look at what we think we know and check ourselves. We have to have the courage, especially in the AI times, to open up decisions we've made when new information surfaces. It's easy to get strategy whiplash, so this is a balance. Stick to your guns. Be unwavering in purpose and vision, and also keep your eyes open. Things are changing so fast, and so often. Eyes up, guys, eyes up. The "why" won't change, but the "how" definitely will.
Step Nine: Real reskilling/interskilling success
My two-hour PowerPoint, no matter how enthusiastically I delivered it, did not transform my workforce. Sorry, guys. It probably won't work for you either. Since my first failed but well-intentioned attempt, we've continued to try, and I feel like we have hit our stride. What started as a two-hour presentation has evolved into a five-day, in-person bootcamp that starts with the definition of AI and ends with each student designing and building their own AI agent using Claude Code.
In addition, we have created an AI operations team and a set of objectives and key results (OKRs). Our AI ops team is a great balance and complement to the value engineering we do, and helps drive AI into the fabric of the company. What we measure is what matters, and that's still true. AI ops is an emerging profession and practice that helps operationalize AI and track and optimize the total cost of AI ownership.
We have a relentless focus on application over attendance. Everyone who participates then applies. Not just in the class but beyond. We use the things we build all the time. Our Chief Marketing Officer Michael Ramos built the application for our next AI exchange in Replit.
And lastly, I encourage us all to focus on the concept of interskilling, rather than upskilling or reskilling. How can we get our workforce to work in concert with classic approaches, to be able to step back and forth between traditional and genAI methods?
Step Ten: Do it all again when everything changes
And then it's Tuesday and everything changes again... Every time I run a course I have to update it for new learnings, new stories, new tools, new ideas. I'm known to say that today is the worst the tech will ever be. It only gets better from here (except for GPT-5, that was definitely a step back).
So that's it - ten not very easy steps to achieve AI workforce transformation. Please join us at our next AI exchange to hear more and engage in the conversation. It's hype-free, real world, and we had a great time last time.
By Tela G. Mathias, Chief Nerd and Mad Scientist at PhoenixTeam, CEO of Phoenix Burst