Insights

Perspectives on AI, technology, and compliance transformation to help you move faster, smarter, and with more clarity.

featured insights

January 6, 2026

How to Get Agents with Agency

What even is agency, amirite? To me, it has become a word I've heard so many times it has lost its meaning as a real word. (Apparently this phenomenon is called “verbal satiation”). But I digress. Let's start this off with the definition of agency, which I had to look up to feel confident.

To have agency is to take action and make choices that influence outcomes. To say “something has agency” means it can decide between options (even if the options are limited), act intentionally, and cause effects in the world.

This can be contrasted with just waiting for things to happen. In my classes I use the metaphor of being proactive (agentic) as opposed to reactive (assistive). Responding to a request as opposed to proactively pursuing an objective. Having spent a lot of time in the last quarter building and deploying agents, here are some key concepts that may be useful for you on your journey to create agents that actually have agency.

Agency = Context + Planning + Memory + Tools + Action + Adaptivity

Context

Context engineering is the discipline of deliberately designing, assembling, and managing everything an AI model sees before it responds. You can think of it as the fuller realization of what started out as prompt engineering. Prompt engineering is what you say, context engineering is what the model knows, remembers, assumes, is constrained by, and is allowed to do at any moment. Yeah, it's kind of a Big Deal.

Consider an agent whose scope is to correct misapplied payments and waive late fees, when appropriate. "Context" in this use case might include:

  1. Information about the loan from the system or systems that have it, including prior payment data from the servicing system.
  2. Information the customer might provide about what happened expressed as a complaint or inquiry.
  3. The specific circumstances around the one payment in question (i.e. "the facts").
  4. The rules (constraints) about what has to happen to a payment when it's received.
  5. The last few things that happened and the next few things that are expected to happen.
  6. The specific span of control the agent has (what is it actually allowed to do).

All of this has to be engineered and cared for. The systems have to be known and able to be integrated with. The data has to be high quality and consistent, and conditions for bad data understood and designed into the workflow. From a testing perspective, we have to know the conditions we should expect, and also plan for the conditions we failed to anticipate.
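
To make this concrete, here is a minimal sketch of the six context ingredients above assembled into one object the model would see before planning. All field names, data sources, and rules below are illustrative assumptions, not a real servicing API.

```python
from dataclasses import dataclass

@dataclass
class AgentContext:
    loan: dict               # 1. loan and prior payment data from the servicing system
    customer_statement: str  # 2. what the customer says happened
    facts: dict              # 3. the specific payment in question
    rules: list              # 4. constraints on how payments must be applied
    recent_events: list      # 5. what just happened and what is expected next
    allowed_actions: set     # 6. the agent's span of control

def assemble_context(loan_id: str, servicing: dict, statement: str) -> AgentContext:
    """Gather everything the model should see before it plans (illustrative)."""
    loan = servicing[loan_id]
    return AgentContext(
        loan=loan,
        customer_statement=statement,
        facts={"payment_id": loan["last_payment_id"]},
        rules=["apply funds to the oldest amount due first"],
        recent_events=loan.get("events", []),
        allowed_actions={"correct_application", "waive_late_fee", "escalate"},
    )

servicing = {"LN-1": {"last_payment_id": "P-9", "events": ["payment received 2025-03-02"]}}
ctx = assemble_context("LN-1", servicing, "My payment was applied to the wrong month.")
```

Every field here is something that has to be engineered: an integration, a data-quality check, or a policy decision about the agent's span of control.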

Let's talk about data. The data informs the prompts, and the prompts inform the data. Rich data requires rich prompts; when the prompts don't "match" the data, we can get really bad results. When I set up a test harness for my agent, I have to have a good set of realistic data because, I can assure you, the agent will find all the little warts and holes and the agent's plan will have that bad context baked right in. I find that I engineer the data over time as my understanding evolves. Then I revise the prompts. Then I learn new things about the data. Then I revise the prompts... You get the idea.

Agents do stupid shit sometimes. Context engineering helps improve the agent's judgment so it does less stupid shit less frequently.

Planning

On its surface, this is relatively straightforward - use a large language model (LLM) to make a plan to solve the problem. There is another aspect to planning, however, which is the actual orchestration of the workflow, knowing when to call tools, knowing when to call deterministic functions, and knowing the scope. In the context of our example, planning might include:

  1. What information is required before we can make the specific interaction plan to correct a misapplied payment and handle the associated late fee.
  2. An overall flow for handling this specific type of problem - classify the problem as payment related, verify the identity of the specific borrower, gather additional payment history data.
  3. The specific plan for the actual interaction - determine correct application of funds, apply funds, determine what to do with late fee, make recommendation to human in the loop.
  4. Take a specific set of action steps to resolve the problem.

So there are multiple aspects to planning - workflow orchestration (which may be a combination of deterministic and probabilistic steps), the actual recommended plan, the validated plan, and the resolution plan.
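
As a sketch, the orchestration side of planning can be as plain as a function that mixes deterministic triage with an ordered step list the agent then executes. The classifier and step names below are illustrative stand-ins; in a real agent, classification and plan generation would likely be LLM calls.

```python
def classify(problem: str) -> str:
    # Deterministic stand-in for what might be a probabilistic (LLM) triage step.
    return "payment" if "payment" in problem.lower() else "other"

def make_plan(problem: str) -> list:
    """Produce an ordered step list for the misapplied-payment flow (illustrative)."""
    if classify(problem) != "payment":
        return ["route_to_general_queue"]
    return [
        "verify_borrower_identity",
        "gather_payment_history",
        "determine_correct_application",
        "apply_funds",
        "evaluate_late_fee",
        "recommend_to_human_in_the_loop",
    ]

plan = make_plan("My March payment was misapplied")
```

The point of the sketch is the separation: the workflow skeleton is deterministic and testable, while the judgment-heavy steps inside it can be probabilistic.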

Memory

It's funny to come from a field where we have sought statelessness for so long. Not to get too technical, as I am already way over my skis talking about things I don't understand well, but service-based architectures (the wave before now) were all about resiliency and moving away from monoliths. The idea is that one part of the system can break, and the rest of the system stays intact.

A stateless service tends to scale well, fail predictably, be easier to understand, and minimizes coordination complexity. This is helpful for us mere mortals. So where did the state actually go? It went into databases, event logs, workflow engines, and... humans. We had state, it was just distributed. Now we have this whole new way of making systems (the agentic way), and the key distinction is understanding and evolving state. And this is one of the critical parts of memory.

Back to our example of the agent that corrects misapplied payments and waives late fees. What does memory mean? It means:

  1. We know what came before, the payments that were made, the fact that there was or was not a late fee applied.
  2. We know what happened in the initial interaction, why the customer contacted us, the channel, how they "usually" interact, whether they were frustrated.
  3. We understand the original plan and the adapted plan based on the feedback from the human in the loop. Perhaps the original plan waived the late fee and the HITL overrode that and ultimately the agent did not waive the fee.
  4. The next time this consumer contacts us, or there is another "related" situation, the agent knows about the misapplied payment that happened in this event context.

So memory can be short term, long term, in the interaction, apart from the interaction, and can apply to the future or not. For those of you using Claude Code, this is a critical part of why Claude "compacts conversations". This takes a set of context that is too large or inefficient to pass each time and keep "in memory," and compacts it for continued use. That compacted context is not typically available in the next session unless it is stored elsewhere, which is a conversation for another article.
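
As a toy illustration of that compaction idea, here is a short-term memory that folds older turns into a long-lived summary once the verbatim history gets too big. The string-joining "summarizer" is a trivial stand-in for an LLM summarization call, and the threshold is arbitrary.

```python
class ConversationMemory:
    def __init__(self, max_turns: int = 2):
        self.max_turns = max_turns
        self.summary = ""   # long-lived, compacted context
        self.turns = []     # short-term, verbatim context

    def add(self, turn: str) -> None:
        self.turns.append(turn)
        if len(self.turns) > self.max_turns:
            self._compact()

    def _compact(self) -> None:
        # Stand-in for an LLM summarization call: fold older turns into the summary.
        self.summary = (self.summary + " " + "; ".join(self.turns[:-1])).strip()
        self.turns = self.turns[-1:]

    def context(self) -> str:
        # What gets passed to the model on the next step.
        return (self.summary + " | " + " | ".join(self.turns)).strip(" |")

mem = ConversationMemory(max_turns=2)
for turn in ["payment misapplied", "identity verified", "history gathered"]:
    mem.add(turn)
```

The summary survives compaction; the verbatim turns do not. That is the trade-off: continuity at the cost of detail.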

Tools

Looking at Claude Code, there are hundreds if not thousands of tools. Claude gives you clues to this when it is thinking, as it notes the number of tools it used in a particular step. I've started to pay attention to this. Tools are just what they sound like - a tool is any capability the agent can call to do something in the world beyond just generating text.

In our example context, tools might be:

  1. Get payment history.
  2. Generate summary for human in the loop.
  3. Waive late fee.
  4. Store document.
  5. Create customer notification.

There can be lots and lots of tools, but keep in mind that the more tools there are, the more ways there are for things to go sideways with your agent. Yes, Claude Code has hundreds or thousands of tools. Anthropic's overall post-money valuation was reported at $183B in September 2025. It's safe to say they have way more money and talent to develop Claude Code than we do for whatever agents we are creating. Stay humble, guys, keep it small and iterate.
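
In that spirit of keeping it small, here is a minimal sketch of a tool registry for the example agent - each tool is just a named callable the planner is allowed to invoke. The tool names match the list above, but the signatures and return values are illustrative.

```python
TOOLS = {}

def tool(fn):
    """Register a callable as a tool the agent may invoke."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_payment_history(loan_id: str) -> list:
    return [{"loan": loan_id, "amount": 1500.00, "applied_to": "2025-03"}]

@tool
def waive_late_fee(loan_id: str) -> dict:
    return {"loan": loan_id, "late_fee": 0.0, "status": "waived"}

@tool
def create_customer_notification(loan_id: str, message: str) -> dict:
    return {"loan": loan_id, "message": message, "status": "queued"}

# The agent invokes tools by name, which keeps the allowed surface area explicit.
result = TOOLS["waive_late_fee"]("LN-1")
```

A registry like this doubles as the agent's span of control: if a capability isn't in `TOOLS`, the agent simply cannot do it.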

Adaptivity

Two types of adaptation we are talking about here - adapting to environmental changes (new data being a big one) and adapting to the human in the loop (HITL). I might also throw in adapting to predefined stopping points - sometimes called "stop hooks" - although that's not really adaptation, that's just executing a plan when a set of plan conditions are met. In any event, adapting to the environment and new or changed context is a critical part of agency. Something changes, and I need to change - I had a plan, now I need to make a new plan. On my own. Without being asked. Or, I need to stop and check in to make a new plan (this is a stop hook).

And then there is human-in-the-loop (HITL) adaptivity. Making a plan, having tools, and executing a plan is really meaningless if it's the wrong plan. In fact, it's worse than meaningless, it can be actually destructive and create harm for humans. Yes we want to provide good context engineering to prevent the wrong plan from being executed, and also, especially in mortgage, we will continue to have humans in the loop for many use cases for many years.

In our example, we might adapt to:

  1. New information - another payment comes in, a check bounces, we receive a bankruptcy notification.
  2. Changed information - turns out the loan type is actually different than what we thought, changing the rules for what we have to do.
  3. Human policy latitude - maybe the system says we shouldn't waive the late fee but in fact the human authority makes a different decision. We will, therefore, need to take a different set of actions (make and execute a new plan).
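
The adaptations above boil down to: an event arrives, the old plan is no longer valid, make a new one. A minimal sketch, with illustrative event names and step lists:

```python
def replan(plan: list, event: str) -> list:
    """Return a revised step list when new information invalidates the old plan."""
    if event == "bankruptcy_notification":
        # Changed constraints invalidate the old plan entirely.
        return ["halt_collection_activity", "route_to_bankruptcy_team"]
    if event == "hitl_override:deny_waiver":
        # Keep the payment correction, drop the waiver, notify the customer instead.
        return [step for step in plan if step != "waive_late_fee"] + ["notify_customer"]
    return plan

original = ["determine_correct_application", "apply_funds", "waive_late_fee"]
after_override = replan(original, "hitl_override:deny_waiver")
```

In a real agent the replanning itself would likely be an LLM call; the deterministic version here just shows the shape of the loop.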

And it goes on and on. I find the best way to really see all this in action is to, well, make it happen. I encourage each of you to create your own agent, or reach out to me and I'd be happy to help you get started on your journey. And naturally, feel free to work with various partners in the industry (including us and the Phoenix Burst team) who already have agents available to do useful things in mortgage.

If you are looking for education, the Mortgage Bankers Association training calendar for AI is already up on their website, and we will be focusing a lot more on agents this year. Hope to see you in a class sometime soon. Happy building!


December 13, 2025

Freddie Mac Bulletin and Executive Order Implications on Mortgage AI

December is not even halfway over and already we have two big developments on the AI housing policy front. I'm sure many of us have seen the buzz on the Freddie Mac AI bulletin and the executive order. Here's my take on what it means for us in mortgage.

Freddie Mac Bulletin Interesting Things

The first kind of interesting thing is the lack of mention of generative artificial intelligence. The implication here is that the more traditional definition of artificial intelligence applies - my definition of artificial intelligence is that very broad field of computer science focused on enabling machines to perform tasks that typically require human intelligence.

One of the most interesting things about the bulletin is that it does NOT say anything about use cases. It does obligate seller/servicers to furnish documentation on the types of AI/ML used, as well as the "purpose and manner" for such use - but it falls short of saying the industry must provide a use case list. Look here for a mortgage AI use case list. Look here and here for a description of AI types, especially those prevalent in mortgage. This is interesting to me because I think the use case list is kind of where the rubber meets the road in mortgage AI; I also think it's one place for differentiation in a mortgage company's AI strategy. I appreciate that this was not called for, as I think it's very special to each organization.

Another very interesting thing about the bulletin is the extent to which Freddie Mac's own technology will show up. Freddie Mac, of course, makes extensive use of AI/ML systems in its own stack so naturally this will show up in the documentation. I wonder if this means that seller/servicers will also have to test Freddie Mac technology for adherence to Freddie Mac's requirements.

Also interesting is the following statement "[l]egal and regulatory requirements involving AI are understood, managed, and documented" - I really wish this was easy. We have what we have "always" had to do from a process integrity and data privacy perspective, then we have the patchwork of state requirements (not new to us in mortgage). There is no decoder ring for the rules, and what I generally say is that we have to anticipate where the puck is going to go. I'm happy to see Freddie Mac give us a less gelatinous sense of the puck's destination. Continue reading for my take on the executive order that could change all this.

To the left of the December 2025 line is what I taught in my classes before last week.

I found the use of the term "trustworthy AI" as opposed to "responsible AI" (RAI) interesting. Upon further review, this certainly harkens back to the NIST definition, which is very close to the RAI framework we teach in our classes. I will start using this term and framework instead. In terms of the trustworthy AI framework defined by NIST, please keep in mind that guardrails and evaluations are a central foundation of a well-implemented framework.

Guardrails are technical and non-technical measures implemented to prevent unsafe, unethical, and unreliable use in production environments. Evaluations measure how well generative AI performs against expectations, helping to ensure outputs are accurate, relevant, and aligned with user goals.
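
To give "evaluations" a concrete shape, here is a minimal, hypothetical eval harness: score your system's answers against a small golden set and keep the number as evidence. The model stub, questions, and expected answers are all made up; a real harness would call your deployed system and use much richer scoring than exact match.

```python
def model_under_test(question: str) -> str:
    # Stand-in for a call to your deployed genAI system.
    canned = {"Can a misapplied payment trigger a late fee waiver?": "yes"}
    return canned.get(question, "unsure")

golden_set = [
    ("Can a misapplied payment trigger a late fee waiver?", "yes"),
    ("Can we change the note rate without a modification?", "no"),
]

def run_eval(cases) -> float:
    """Fraction of golden-set cases where the system's answer matches expectations."""
    passed = sum(1 for question, expected in cases if model_under_test(question) == expected)
    return passed / len(cases)

accuracy = run_eval(golden_set)  # a score you can store and monitor over time
```

Even a harness this small turns "we trust the vendor" into "we measured it" - which is the posture the bulletin is asking for.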

The center of the cheeseburger is the NIST definition of trustworthy AI; the buns and sidebars are the author's own addition. A trustworthy AI framework is unevidenceable (is that a word?) without guardrails and evaluations.

The last thing in the bulletin I found interesting was the segregation of duties language. I'll be honest, I hadn't really thought about this before. In context it makes sense. This language is trying to prevent a very specific failure mode. Specifically, it's designed to help ensure that the people who benefit from using an AI system are not also the people who define the risk, “measure” the risk, and sign off that the risk is acceptable. Makes sense.

Specific Steps Mortgage Companies Should Take to Comply with the Freddie Mac Bulletin

  1. Incorporate your AI/ML uses and applications into your policy and procedures. Create a matrix articulating same so you can provide it to Freddie Mac upon request. I would also strongly suggest creating or updating your use case matrix. It's not specifically asked for, but it's a good practice. Take a risk-based lens (add that as a column in your matrix).
  2. Review current governance process for AI/ML systems and ensure you have the right operational structures in place for accountability and prevention of conflicts of interest.
  3. Get your guardrails and evals in place. I cannot emphasize this enough. This is the key to trustworthy AI. We do not trust the foundation model providers and AI vendors to just "take care of it" for us. We must verify. The accountability expected (and expressed in the form of indemnification) is not really new, as always as a seller/servicer - you are responsible. As the former CFPB director said "there is no special exemption for artificial intelligence".
  4. Get your AI controls defined and risk mapped. This is central to having a validatable control and security framework, which you should already have.
  5. Monitor (and store evidence of that monitoring) your AI systems. This will be tricky for those of us using foundation models (so, basically all of us). I'm unsure what the expectation is from Freddie Mac on the testing and monitoring required for things like Microsoft Copilot and enterprise deployments of foundation models. Is the expectation that we will do our own testing of these platforms for data poisoning and adversarial inputs? That's a pretty heavy load for anyone, really, and requires a pretty hefty dose of technical skill in addition to access that won't be given.
  6. And finally, I'd suggest reaching out to Freddie Mac to discuss the implication of this policy and how you intend to implement it. It's a great group out there and I'm sure they would be happy to engage.

December 11 Executive Order Interesting Things

Definitely the most interesting thing about the executive order (to me, anyway) was the statement that the administration intends to "initiate a proceeding to determine whether to adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws". Personally, I would welcome a set of requirements for appropriate use of AI systems. Right now, it's really hard to know what to do. Anticipating where the puck is going to go puts a major damper on innovation. Many companies are paralyzed by the lack of clarity, which stymies creativity. I don't think it would be so bad to have a decoder ring. I acknowledge that I may regret this statement in the future.

Also interesting to me is the extent to which these two sets of guidance were released in coordination with each other. I have to think they were reviewed in concert. One does not conflict with the other, but if the Freddie Mac bulletin had been produced by a state government, I wonder how it would be reviewed under the executive order.

Specific Steps Mortgage Companies Should Take to Comply with the Executive Order

Nothing really to do here except watch and wait. Of course, it's a great idea to keep the lines of communication open with your state examiners and regulators, see what they are doing and thinking about it. Until something changes, there are about 150 distinct AI-related laws, ordinances, and legislative proposals out there that we should understand and determine for ourselves if they apply to us.

By Tela G. Mathias, Chief Nerd and Mad Scientist at PhoenixTeam, CEO at Phoenix Burst


December 8, 2025

Eleven Reasons Why the Mortgage Industry Isn't Further Along with GenAI Adoption

I got asked a really excellent question yesterday: "Why isn't the mortgage industry further along with genAI adoption?". I really should have had a better answer, given that my one job is to help the industry adopt genAI and I pretty much eat, sleep, and breathe mortgage AI. I just hadn't really sat down to formulate my thoughts, and my answer was pretty meh. So I had time this morning to do a better job. Here's my take, in no particular order.

#1 - It's actually really hard to do at scale

It's hard. I have a commercial product in addition to observing and partnering with organizations to scale genAI solutions and it's just really hard. The tech can be fragile. There is an enormous amount of error analysis to do if you want to get it right. Mortgage is flooded with choices (so many amazing genAI demos by vendors, so few credible evaluation results), creating decision fatigue. There is so much to learn and figuring out the core tech is challenging.

#2 - The mortgage technology ecosystem

The industry technology ecosystem is still sandwiched between tech that was already aging and the constellation of wrap and ancillary applications that sprouted up post-2008. The ecosystem is unbelievably complex; numerous workarounds and control reports supplement the humans who have to make the tech work to get things done. We have multiple lines of defense used to verify that the technology has done the right thing. Often, we see four lines of review for one decision. That's really challenging to integrate with.

#3 - Fear of getting it wrong

Pre-genAI there were already hundreds of thousands of rules to implement to make and service a mortgage. Maybe 150,000 rules, and at least a million pages of documents to comply with. And that was before genAI. The consequences of "getting things wrong" were already high. Fines, buybacks, consent orders... not to mention the financial and emotional costs to the homeowners. Getting it wrong is a big deal. Now enter genAI. It will never be 100% correct. It just won't; that's not how it works. In an industry where perfection is the standard (even though the humans in the process are not perfect) it's hard to introduce technology that is not rules-based.

#4 - The people set the pace

I often make the mistake, like many of us do, of thinking that everyone thinks like I do. I eat new technology for breakfast. I thrive in uncertainty. I enjoy the pressure created when the stakes are high. I like change, it keeps things interesting. This is a very myopic way of thinking. Everyone does not think like I do, and what a boring, chaotic world that would be if they did. Each human on this AI journey is on, well, their own personal journey. We can buy all the tech we want, and eventually, even with AI, there's a person somewhere who has to use it or derive value from it. It's not the tech that sets the pace, it's the people. And frankly, I'm kind of grateful for that. AI headlines are kind of scary. Maybe the human adoption throttle is a good thing.

#5 - Talent gaps

Unless we have bajillions of dollars, the talent bar is so high that it's effectively unachievable. We are all looking for these unicorns - these super savvy, genAI native, AI experts who know mortgage and have people skills. All for like $150K. Guess what, guys: not happening. So we all have to kind of fumble around to find the talent, grow the talent, partner to acquire the talent. It's just really hard. And it's really hard to actually tell where the tech really is. What can actually be implemented safely and at scale? You literally have to trawl the developer community to see what the real deal is with agents. There are so many PowerPoints. Who has time to sift through them all and then pressure test them? And then there's all the completely-unsexy-yet-utterly-necessary error analysis. Guess what? That takes humans who really know the business. You know where those people are? Yeah, they are in the business.

#6 - The unrelenting pace of change

This one is really daunting, even for me, and this is my whole job. The pace of change is unlike anything I have ever seen in 25 years of tech. I don't have a fancy Silicon Valley pedigree, but I sure spend a lot of time making up for it, and I just can't keep up with every aspect of every potentially useful thing in the AI space. I can't go to every conference, and even if I could, it still wouldn't be enough. Just when we think we've figured something out, there's the next new thing. RAG was it, then agents were it, then agentic workflow was it, now it's neurosymbolic AI. There is not an organization in the world of any size that can ingest this kind of change in an immediately productive way.

#7 - Competing priorities

We all have lives to care for, many of us have families to feed (or at least a cat or houseplant). We are all running a business in one of the most uncertain times of our country's history. We all want to create time and space to experiment and learn. But we still have beans to count, and we only count two types of beans in mortgage - heads and dollars. That's just the way it is. So the pressure to generate revenue or reduce expense is absolutely unrelenting. And that's not going to stop. It takes a truly rare executive team with the emotional and intestinal fortitude to invest what it takes to figure all this out.

#8 - Hallucination rates

Then there's just the basic fact of hallucination rates. It's a thing. In order to produce the truly fantastic results we can get out of a large language model, we must be able to accept variability - and that variability can be inaccurate. Say it with me now, folks: this is probabilistic technology. If you need 100% accurate answers 100% of the time, genAI is not for you. But I will challenge the idea that we actually need 100% accuracy 100% of the time in mortgage. Mostly because I know with 100% certainty that we don't have it today. This requires what Sequoia has called a stochastic mindset shift (I wrote about this here).

#9 - There’s no instruction manual

Yes, we operate in a completely rules-based industry. But is it really? What about all that interpretation we have to do to the federal rule set? What about all those VA circulars? Lender letters? We already don't have an instruction manual, and now we add still-in-the-oven, paradigm-changing technology to the mix. Talent gaps. No help from regulators or the White House. The state patchwork. It's a mess, and we take all the risk ourselves. We take all the learning on our own. It's just really hard.

#10 - Organizational inertia

Moving an organization of any size (even a small size) is hard. We are just ingrained in how we do what we do. Every system is perfectly designed to produce the result it produces. We have settled into a way that we understand, doing a thing we know. The weight of what we have built, especially in mortgage, holds us down. 2008 crushed us all. That's where these four lines of defense came from. TRID crushed us all. It literally cost tens to hundreds of millions of dollars to implement change of that size. Everything about our organizations is optimized to carry that weight. We are stuck, all of us.

#11 - Serious resistance to process reimagination

I added this one after thinking through this for a while, so unfortunately my top ten list is now a top eleven list. So much for clickbait. It's very hard to reimagine what we do. Even if we can reimagine it, then we have to make it real. A great example is the regulatory change process. The true cost of regulatory change is not tracked, not really. It is a fantastically distributed process that touches every single part of our organizations and all the technology and people involved. From the attorney who summarizes the change to the tester working for a third-party vendor who implements a piece of code that changes a calculation. No one counts all those steps. We don't know what it really takes, what it really costs. And if we don't know how it is, it's very hard to know how it could be.

So there you have it, faithful readers, if you're out there. My top eleven reasons why we are not further along with this genAI thing. Do not despair, however, the change is here and the time is now. We'll get through it eventually, we always do.

By Tela Mathias, Chief Nerd and Mad Scientist, PhoenixTeam | CEO, Phoenix Burst


November 17, 2025

Looking for ROI in All the Wrong Places

I recently wrote about the now infamous and largely eye-rolled MIT study. You'll remember its sensational headline that 95% of organizations are getting a zero percent return on their genAI investments. Yikes. I won't rehash that article here, but I will offer an alternative that didn't get quite as much press coverage: a three-year longitudinal study published annually by the Wharton School that had a very different conclusion.

Author's note - I have created a publicly available Google Drive for all the interesting things I find that seem useful for sharing. You should be able to access it, let me know if you cannot.

Wharton's slightly more scientific study found that of 800 leaders surveyed, 75% report positive ROI from their genAI investments. Hmm.

One of the headlines of the Wharton study was that "most firms now measure ROI, and roughly three of four already see positive returns". Darn it, the ROI problem continues to be devilishly confounding. This genAI business continues to be such an emotional roller coaster. So let's dig in, and then I'll give you my $0.02 on what it means to us in mortgage.

Methodology

This one was slightly more scientific: it was based on 15-minute online "quantitative" surveys of 800 leaders, and the study has been repeated each year beginning in 2023, which I have to say was very forward-looking of Wharton. I put quantitative in quotes because, while I am sure the data they gathered was numerical, the source of the information was people describing their experience and, therefore, qualitative.

Definitely bigger than the MIT study, and assessed over a longer time horizon. Still qualitative and rich with opportunities for bias.

Main Observation #1: GenAI Usage is Now Mainstream

There was a lot to this first observation that was very validating. I haven't seen a good study on adoption since the Deming study from late 2024 (there is still no better one, unfortunately). Yes, genAI is mainstream. Yes, 46% of leaders are using it every day and 80% are using it at least weekly. Still, of note, one in six executives surveyed was not using it (I commend them for their honesty). Those executives and companies are in for a cold awakening if they don't join us on the AI train.

Very interesting to note that "practical, repeatable use cases supporting employee productivity" see the most adoption, with IT, legal, and HR being the furthest ahead. I continue to believe that adoption is good, employee productivity is a great place to start, and also the truly differentiating companies will be making their inroads much deeper into operations. Unsurprisingly, the study finds that operations as a business area is among the furthest behind. Well yeah, it's the hardest.

Main Observation #2: 75% of Respondents Reported Positive ROI

One thing holding us all back, that the study found as a positive, is the idea that "accountability is now the lens". Yes - we need to find the value for sure, and also we are still early. Part of the value is in the learning, and the struggle. We can easily get stuck in a spiral of ROI purgatory when we load up a small set of narrow use cases with the full spend of getting started.

Budgets are moving from "one-off pilots to performance justified investments" and budgets are being moved from existing cost centers to genAI adoption. Again, yes, we need to justify investment, but I'm telling you it should just be one lens, not the sole picture.

One of the headline conclusions here is "budget discipline + ROI rigor are becoming the operating model for genAI investment". This to me is a sign that bean counters (absolutely not judging the bean counters, love the bean counters) are winning the executive perspective.

Main Observation #3: Culture is the Adoption Throttle, Not Technology

This was the most interesting section of the study for me, by far. "People set the pace"... I love this, and I could not agree more strongly that this is true. The gap between how we are using genAI (the "everyday AI" that is mainstream) and what it can actually do is a massive chasm.

We are seeing that almost 70% of leaders reported they have some kind of Chief AI Officer role, indicating that accountability for AI adoption has moved into the C-suite. At the same time that confidence in AI and its ability to provide value grows, capability is falling short.

"Capability building is falling short of ambition. Despite nearly half of organizations reporting technical skill gaps, investment in training has softened, and confidence in training as the primary path to fluency is down. Some firms are pivoting to hiring new talent, yet recruiting advanced genAI skills remains a top challenge."

Unless you have millions of dollars, recruiting advanced genAI skills is absolutely impossible, especially in mortgage. The only way to get it is to home grow it, or partner for it. And when partnering is chosen, it needs to come with a coaching and "teaching by doing" component. I continue to believe that every organization needs to have in house genAI talent, and that means investing in general education, applied education, education that favors application over attendance. Oh, and learning and applying new ways of thinking.

"Cartoon style scene of several friendly bear cubs swirling playfully around glowing streams of futuristic technology - floating holograms, data ribbons, and soft light particles..." Created on Midjourney

This is where we see that "people set the pace". In the past few weeks at PhoenixTeam, we have been up to our eyeballs in Phoenix Burst product adoption and workforce enablement, both separately and together. At the heart of adoption is the people. Literally. There is no adoption without the people. I can be guilty of this myself: we focus so much and so hard on the tech, the product, and the experience, and we lose sight of the hearts of the people. The fear. The lack of trust. The domain of "change management".

The study echoes: "the human side remains the bottleneck and a key potential accelerant. Morale, change management, and cross functional coordination remain persistent barriers. Without deliberate role design, coaching, and time to practice, 43% of leaders warn of skill atrophy, even as 89% believe genAI tools augment work."

What does this mean for us in mortgage?

I had a great friend say to me last week about adoption struggles, "perhaps it's FOAK?", which I had to look up; it stands for "first of a kind." (Embarrassed to admit it, but here it is. Authenticity is the best route to achieve authentic human connection.)

Thank you, ChatGPT, for once again helping me learn and put words to things I knew but only felt.

And yes, tying back to the study, for those of us who are working on really challenging areas -- where the metrics are not well established, where the tasks are not necessarily repeatable, where the path ahead has to be redesigned -- we are running into FOAK. Taking a step back to think about the people, and learning from what they are telling us, is really important. And forging ahead, being adaptable, being resilient, and staying positive.

In no particular order, what I think all this means to us in mortgage:

  1. When we invest in applied learning, the people adapt better. I suggest that mortgage leaders look at their talent strategies, really think about what it means to the people, and create tailored learning strategies based on real change and thoughtful consideration about what is actually changing about actual jobs.
  2. Remember, whatever feedback you are getting, it's valid, even if you don't agree with it. That is an opportunity to open our minds to what someone else is thinking, doing, and feeling. Their experience is their experience. I suggest that mortgage leaders inspect their own feelings, inspect how they are using genAI (if at all), and then talk to others at all levels about their experience. It probably won't be the same.
  3. If you want fast adoption and fast ROI, go with the easy stuff. Just know that the easy stuff is commoditizing faster than you know, and differentiating on fast and easy isn't a thing.

There is so much happening out there, it's very hard to keep up. I hope you will reach out and share what you are learning and applying, what's working and what's not. We are listening and trying to find solutions.

By Tela Gallagher Mathias, Chief Nerd and Mad Scientist at PhoenixTeam

Artificial Intelligence

How to Get Agents with Agency

January 6, 2026

Artificial Intelligence

Freddie Mac Bulletin and Executive Order Implications on Mortgage AI

December 13, 2025

Artificial Intelligence

Eleven Reasons Why the Mortgage Industry Isn't Further Along with GenAI Adoption

December 8, 2025

Artificial Intelligence

Looking for ROI in All the Wrong Places

November 17, 2025

Artificial Intelligence

What does the infamous "MIT study" really mean to us in mortgage?

October 28, 2025

Our Company

Blue Phoenix Awarded $215 Million VA Loan Guaranty DevSecOps Contract | A PhoenixTeam and Blue Bay Mentor-Protégé Joint Venture

October 3, 2025

Artificial Intelligence
Our Thoughts

Ten Not Very Easy Steps to Achieve AI Workforce Transformation

September 29, 2025

Our Thoughts

MISMO Fall Summit Recap: Our Take on the Summit, AI, and the Road Ahead

September 25, 2025


Towards Determinism in Generative AI-Based Mortgage Use Cases

September 22, 2025

Our Company

PhoenixTeam Achieves SOC 2 Compliance, Strengthening Security and Trust in Its Phoenix Burst GenAI Platform

September 9, 2025

Artificial Intelligence

My Journey with Claude Code and Running Llama 70b on My Mac Pro

September 4, 2025

Our Company

Tela Mathias recognized as a 2025 HousingWire Vanguard

September 2, 2025

Our Thoughts
Artificial Intelligence

Top Ten Insights on GenAI in Mortgage

August 25, 2025

Our Company

PhoenixTeam Awarded $49M Contract to Modernize USDA’s Guaranteed Underwriting System, Expanding Rural Homeownership

August 18, 2025


Case Study: The Messy and Arduous Reality of Workforce Upskilling for the AI Future

July 28, 2025

Artificial Intelligence

What is uniquely human? AI impacts on the workforce.

July 7, 2025

Artificial Intelligence

251-Page Compliance Change in Hours, Not Months

June 20, 2025

Our Thoughts
Artificial Intelligence

The Medley of Misfits – Reflections from Day 2 at the AI Engineer World’s Fair

June 5, 2025

Our Thoughts
Artificial Intelligence

AI Engineer World’s Fair: What We’re Seeing

June 4, 2025

Artificial Intelligence

Departing from Determinism and into the Stochastic Mindset

May 30, 2025

Artificial Intelligence

The Agents Are Here and They Are Coming for our Kids

May 6, 2025

Our Company

Built for What’s Next: Welcome to the New PhoenixTeam Website

April 29, 2025

Artificial Intelligence
Our Company

Phoenix Burst Honored with MortgagePoint Tech Excellence Award for GenAI Compliance Innovation

April 7, 2025

Artificial Intelligence
Our Thoughts

From Trolling to Subscribing – An Alternative to Compliance Insanity

March 10, 2025

Artificial Intelligence
Our Thoughts

Supercharge LLM Performance with Prompt Chaining

February 24, 2025

Artificial Intelligence
Our Thoughts

From Program Management to Program Efficiency and Innovation

February 20, 2025

Artificial Intelligence
Our Thoughts

The Evolution of Service Level Agreements: Why AI Evaluations Matter in Mortgage

February 12, 2025

Artificial Intelligence
Our Thoughts

An Impassioned Plea for AI-Ready Mortgage Policy Data

February 3, 2025

Artificial Intelligence

The Role of Mortgage Regulators in Generative AI

January 28, 2025


PhoenixTeam Announces Partnership with Mortgage Bankers Association to Offer GenAI Education for Mortgage Professionals

January 13, 2025

Artificial Intelligence
Our Thoughts

Calculating AI ROI in Mortgage: Strategies for Success

December 16, 2024

New Contract
Our Company

PhoenixTeam Awarded $5 Million Contract for HUD Section 3 Reporting System Modernization

November 25, 2024

Artificial Intelligence
Our Thoughts

The History of Artificial Intelligence in Mortgage

November 21, 2024

Artificial Intelligence
Our Thoughts

Adoption of GenAI is Outpacing the Internet and the Personal Computer

November 8, 2024

Artificial Intelligence
Our Company
Phoenix Burst

PhoenixTeam Launches Phoenix Burst — A Generative AI Platform to Accelerate the Product and Change Management Lifecycle

October 22, 2024

Artificial Intelligence
Our Thoughts

Application of Large Language Model (LLM) Guardrails in Mortgage

October 17, 2024

Artificial Intelligence

Where is GenAI Going in Mortgage?

September 9, 2024

Recognitions
Our Company

Tela Mathias Wins HousingWire's Vanguard Award

September 4, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Episode 9 | The Lindsay Bennett Test: A Live Assessment of Phoenix Burst with a Product Leader

July 19, 2024

Artificial Intelligence
Our Thoughts

A Practical Approach to AI in Mortgage

July 18, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Episode 8 | Inside Phoenix Burst: Transforming Software Development with AI

July 11, 2024

Accessible AI Talks
Our Thoughts
Phoenix Burst

Accessible AI Talks | Episode 7 | Leading a Gen AI Team to Production

July 10, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

AI Reflections After Getting Lots of Feedback

June 21, 2024

Our Company

PhoenixTeam Ranks Among Highest-Scoring Businesses on Inc.’s Annual List of Best Workplaces for 2024

June 18, 2024

Our Thoughts

MISMO Spring Summit 2024: Key Insights and Takeaways

June 18, 2024

Our Community

Join Us in Making a Difference: Hope Starts with a Home Charity Drive

June 3, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts

Freeing the American People from the Bondage of Joyless Mortgage Technology

June 3, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 6 | The Role of AI in Solution Design

May 25, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 5 | The Problem of Product Design

May 22, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 4 | The Problem of Requirements

May 8, 2024

Artificial Intelligence
Phoenix Burst

What is a value engineer?

May 7, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 3 | Problem of Shared Understanding

May 7, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 2 | The Imagine Space and More

May 6, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 1 | Introduction with Guest: Brian Woodring

May 6, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

Storyboards Matter: Three Insights for Using AI to Accelerate Product Design and Delivery

May 3, 2024

Our Company

PhoenixTeam Designated as One of MISMO's First Certified Consultants, Shaping the Future of Mortgage Industry Standards

April 24, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

The Two Rules of Gen AI

April 18, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

Three Simple Steps to Kickstart your AI Journey Today

April 16, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

The Peanut Butter and Jelly Sandwich AI Experiment

April 16, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

Starting Our AI Journey

April 10, 2024

Our Company

PhoenixTeam CEO and COO Make Inc.’s 2024 Female Founders List

April 10, 2024

Our Company

PhoenixTeam has earned its spot on Inc. 5000 Regionals: Mid-Atlantic

March 4, 2024

Our Company
Our Thoughts

Key Insights and Takeaways from MISMO Winter Summit 2024

January 29, 2024

Our Company
Our Thoughts

Leader of the Year Interview with Jacki Frazer

January 18, 2024

Our Company
Our Thoughts

Why Phoenix - Shawn Burke

December 28, 2023

Agile
Our Thoughts

PhoenixTeam at Agile + DevOps East 2023: Key Insights and Takeaways

November 17, 2023

Spotlights
Our Company
Our Thoughts

Why Phoenix — Vicki Withrow

November 3, 2023

Our Thoughts

Breaking Barriers: The Extraordinary Woman Who Redefined Workplace Equality

October 20, 2023

Our Company

PhoenixTeam: In the Arena

October 12, 2023

Agile
Our Company

Defining the Word “Done” The Phoenix Way

September 14, 2023

Product
Our Company

PhoenixTeam Approved as DOD SkillBridge Partner to Help Active-Duty Military Service Members Re-Enter Civilian Workforce

August 18, 2023

Recognitions
Our Company

PhoenixTeam featured on Inc. 5000 List of America’s Fastest-Growing Private Companies for the 4th Consecutive Year!

August 15, 2023

Agile
Our Company

Military Veteran’s Transition from Active Duty to Civilian Life as Lean-Agile Methodologist and Coach

August 1, 2023

Product
Recognitions
Our Company
Our Work

Veteran Founded Technology Venture Blue Phoenix Expands Reach with GSA IT-70 Award

July 24, 2023

New Contract
Our Company
Our Work

PhoenixTeam Begins New Partnership with HUD for FHA Catalyst

June 9, 2023

Our Company

PhoenixTeam is Excited to Announce Becky Griswold as its Newest Partner

June 1, 2023

Product
Our Work

PhoenixTeam proves value realization begins with product discovery

March 30, 2023

New Contract
Our Company
Our Work

PhoenixTeam Strengthens Partnership with U.S. Department of Agriculture

March 29, 2023

Recognitions
Our Company

PhoenixTeam Featured on 2023 Inc. Regionals Mid-Atlantic for Third Consecutive Year

February 28, 2023

Recognitions
Our Company

PhoenixTeam Announces 2022 Annual Company Award Winners!

January 30, 2023

Recognitions
Our Community

PhoenixTeam Ranks #15 on Washington Business Journal’s 2022 Fastest Growing Companies

October 21, 2022

Recognitions
Our Community

Fortune and Great Place to Work® Rank PhoenixTeam #29 2022 Best Workplaces in Technology™

September 7, 2022

Recognitions
Our Company

PhoenixTeam Featured on Inc. 5000 List of America’s Fastest-Growing Private Companies

August 16, 2022

Recognitions
Our Company

Fortune and Great Place to Work® Rank PhoenixTeam #53 2022 Best Medium Workplaces™

August 8, 2022

Recognitions
Our Company

Fortune and Great Place to Work® Rank PhoenixTeam #29 2022 Best Workplaces for Millennials™

July 18, 2022

Recognitions
Our Company

PhoenixTeam Ranks Among Highest-scoring Businesses on Inc. Magazine’s Annual List of Best Workplaces for 2nd Consecutive Year

May 10, 2022

Recognitions
Our Company

PhoenixTeam Featured on 2022 Inc. Regionals Mid-Atlantic for Second Consecutive Year

March 15, 2022

Our Community

PhoenixTeam Goes Pink for Breast Cancer Awareness Month

October 15, 2021

Our Company

PhoenixTeam shows up strong at the MISMO Fall 2021 Summit

October 5, 2021

Recognitions
Our Community

PhoenixTeam makes the 2021 Inc. 5000 list for 2nd consecutive year!

August 17, 2021

Salesforce
Our Work

PhoenixTeam is Now a Salesforce Partner!

June 30, 2021

Our Company

Introducing the newly designed PhoenixTeam Website

June 29, 2021

Recognitions
Our Company

PhoenixTeam Ranks Among Highest-Scoring Businesses on Inc. Magazine's Annual List of Best Workplaces for 2021!

May 12, 2021

Our Company
Our Thoughts

The Importance of Continuous Learning for Team Members

April 20, 2021

Salesforce
Our Work

PhoenixTeam’s Three Pillars to Successfully Implementing Salesforce

April 6, 2021


Recognitions
Our Company

Tanya Brennan Recognized with 2020 Lending Luminary Award

October 16, 2020

Our Thoughts

Get to Know Your Customers

October 15, 2020


© 2025 PhoenixTeam. All rights reserved.   |   Privacy Policy   |   Terms of Use