Insights

Perspectives on AI, technology, and compliance transformation to help you move faster, smarter, and with more clarity.

featured insights

January 20, 2026

Leading When Digital is Cheap and AI Slop is Everywhere

It's hard to create impact in a sea of slop. (Coined by tech journalist Casey Newton, "AI slop" is the term for the flood of low-quality, AI-generated content created rapidly to attract eyeballs and sell or promote things.) One of the effects of this AI slop era is to diminish the value of online and offline content. What we see and read online is now subject to a new critical lens of "is this real?" and "was this generated by AI?". This effect is carrying over to offline content as well - if everyone is an expert, who and what can we rely on?

Last week we closed out the third AI exchange in our series on AI's impact on the workforce. One of the questions I wanted to answer was how to lead when digital is cheap and AI slop is everywhere. What's different about an AI-fluent leader? How do we find and amplify the things that matter for teams of humans and machines, working together? How do we help a group see clearly in a sea of bland and remixed messages?

Big caveat - I can only speak from my own lived experience, so this writing is from that perspective. Different leaders do different things and can be equally or more successful; I just offer my $0.02 for what it's worth.

Leadership

I went down a frustrating leadership rabbit hole last week preparing my opening remarks for the AI exchange. I figured I would find a quick, easy-to-lean-on definition of leadership. After about 30 minutes I settled on this one: leadership is helping a group to see clearly, choose a direction, and move together, while owning the consequences. Leadership is a quality a person can have, a thing a person or people can do, and a process.

AI-Fluent Leadership

Unlike genAI-native technology (which I'll shorten to AI-native for this article), we don't have truly "AI-native" leaders yet since the AI-natives are only about three years old right now. I'll define AI-fluent leadership as:

Helping a group see clearly when AI (and AI slop) is an integral part of everything we do, choose a direction that will be informed by both humans and machines, and move together with people that have widely variable levels of AI-fluency, all while owning the consequences of what humans and machines do.

AI-Fluent Leadership Actions

I am, or am at least trying to be, an AI-fluent leader. Maybe not a good one, but that is a subject for another day. So what has changed about what I do? The first and most obvious change in my actions is that I use genAI all the time, every single day, constantly. I use it to create new things, accelerate my work, and amplify my creative impact. I have become supercharged; I can do all the things I used to be able to do at an even more vigorous level.

I also build things all the time. At least once a day I am in Claude Code building something new, or tweaking something I built already. I also use the things I build. Not all of them - sometimes I build things that turn out not to be useful - but this has been perhaps the most profound change in my way of working. Later in my career, especially in the last ten years or so, I became a pretty prolific creator. GenAI has now unlocked the inventor and individual builder in me.

For better or for worse, I write all my own content. And it's really time-consuming. I use writing to crystallize perspectives, consider alternatives, and learn. Inevitably I have to research, pull threads, maybe try something new - that's just my process. I think it's important to have our unique voice, and to have that voice come through. I don't even use genAI to edit, as I have come to believe the occasional typo is a good thing - it shows I'm not a bot.

AI-Fluent Leadership Perspectives

I now believe that anything we can imagine, we can make. We can have an idea, and bring that idea to life in a day. The only limits are the limitations imposed by physical reality, and even that is changing with accelerated computing and embodied AI (think robotics). Even if I can't do it, could a robot do it? As Jensen says, when we take the effective cost of something to zero or approximately zero, previously unimaginable things become possible.

I also believe that quality matters way more than it used to, and this is because digital has become so cheap. Creating "good enough" is now trivial, which means "good enough" isn't good enough anymore. It has to be great. A side effect of this is the importance of offline results. Actual physical things. Yes, the world has gone paperless, and I'm going paperfull. I'm like the salmon fighting the current. I want things to matter more, and I believe part of that is taking the time and putting in the effort for a quality physical thing. I want to create feelings and memories that stand out.

My nine-year-old daughter believes there are no mistakes in art, just happy accidents. This to me is an absolutely perfect manifestation of the beginner's mindset, and I love it. I believe in happy accidents: there is always learning in failure, and sometimes failing is succeeding. Not always, but a lot of the time.

AI-Fluent Leadership Expectations

I expect more from myself as a leader and from my leaders. I expect us to be able to move faster. I expect us to be able to scale out quickly. It's probably not a realistic expectation, but I expect leaders to be able to move at something at least approximating the pace of genAI. We talk about being on genAI time - where a day is a week, a week is a month, and a month is a year. I expect us to move with that kind of intensity.

Ethan Mollick talks about the jagged frontier, and I expect AI-fluent leaders to push the jagged frontier all the time. One of the most frustrating challenges in this AI future is bumping up against that boundary - that thing that should be possible but just isn't. Even more frustrating is the fact that the impossible could become possible at any point, and so we have to keep trying. What is possible isn't settled - just give it a week.

This AI future is hard and it moves fast. I do expect all this to take time, which I realize is counter to my first point (but as I often say - two opposing things can be true at the same time). I expect that we are all works in progress and we have to give ourselves grace. We have to give others grace as well. We have to keep trying and learning from those happy accidents.

AI-Fluent Qualities

Bob Sternfels, the global managing partner of McKinsey, described three qualities required to succeed in the AI future. I'm not a huge McKinsey fan, but I did find his remarks insightful - they were really good. He talks about what the models can't do, and how these uniquely human characteristics are the differentiating qualities that have already increased in significance since the mass availability of genAI.

The first is aspiration: setting the right ambitions and getting others to believe. The second is judgement: being able to know what is right and what is wrong when there are no easy answers. And the third is true creativity, as contrasted with statistical remixing.

AI-Fluent Leadership Worries

One of the most significant pitfalls of AI-fluent leadership is the risk of creating a two-tiered workforce. I don't have an answer here, and I'm not even sure I have the right questions. As a society, do we have an obligation to create pathways for workers who are not AI-fluent and do not intend to become AI-fluent? Is the answer to that question different for each of us as leaders in our organizations?

As a leader at my company and in my industry, for at least the next ten to 20 years, we will have these pathways. We have clients that are not AI-fluent, and really haven't started their journey. We have and will continue to meet them where they are. We also have clients that are trailblazing in the AI future. We have and will continue to meet them where they are as well. And for all clients, regardless of where they are on their AI journey, our goal is to deliver outcomes for them. That hasn't changed.

I worry about the entry level professional roles in my industry, and roles for people entering our workforce from non-traditional paths. We simply won't have the traditional starter jobs anymore. I turned 50 last weekend, and the entry level jobs from 25 years ago for people like me don't or won't exist anymore. I have four kids - ages eight, nine, 11, and 22. I believe that we need to both prepare them for the path as well as prepare the path for them. My job as a parent is to help them find their way, help them to be happy, kind, and financially independent (when possible). My job as an employer is to create entry level opportunities, even if it's a little fuzzy what those opportunities are.

By Tela G. Mathias, Chief Nerd and Mad Scientist at PhoenixTeam


January 6, 2026

How to Get Agents with Agency

What even is agency, amirite? To me, it has become a word I've heard so many times it has lost its meaning as a real word. (Apparently this phenomenon is called "verbal satiation".) But I digress. Let's start this off with the definition of agency, which I had to look up to feel confident.

To have agency is to take action and make choices that influence outcomes. To say "something has agency" means it can decide between options (even if the options are limited), act intentionally, and cause effects in the world.

This can be contrasted with just waiting for things to happen. In my classes I use the metaphor of being proactive (agentic) as opposed to reactive (assistive): responding to a request, as opposed to proactively pursuing an objective. Having spent a lot of time in the last quarter building and deploying agents, here are some key concepts that may be useful for you on your journey to create agents that actually have agency.

Agency = Context + Planning + Memory + Tools + Action + Adaptivity
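The equation above can be sketched as a toy loop in Python. This is purely illustrative - none of these names come from a real agent framework, and a real agent would use a model to plan rather than a fixed list:

```python
from dataclasses import dataclass, field

@dataclass
class AgentLoop:
    """Toy decomposition of agency into the components named above."""
    context: dict = field(default_factory=dict)   # what the agent knows right now
    plan: list = field(default_factory=list)      # ordered steps toward the goal
    memory: list = field(default_factory=list)    # what it retains across turns
    tools: dict = field(default_factory=dict)     # callable capabilities

    def act(self):
        """Action: execute the next planned step with a registered tool, record it."""
        if not self.plan:
            return None
        step = self.plan.pop(0)
        result = self.tools[step](self.context)
        self.memory.append((step, result))
        return result

    def adapt(self, new_fact: str, value):
        """Adaptivity: new information lands in context and forces a replan
        (here trivially, by queueing a 'reassess' step)."""
        self.context[new_fact] = value
        self.plan.insert(0, "reassess")
```

The point of the sketch is only that the components interlock: context feeds action, action feeds memory, and new information reshapes the plan.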

Context

Context engineering is the discipline of deliberately designing, assembling, and managing everything an AI model sees before it responds. You can think of it as the fuller realization of what started out as prompt engineering. Prompt engineering is what you say, context engineering is what the model knows, remembers, assumes, is constrained by, and is allowed to do at any moment. Yeah, it's kind of a Big Deal.

Consider an agent whose scope is to correct misapplied payments and waive late fees, when appropriate. "Context" in this use case might include:

  1. Information about the loan from the system or systems that hold it, including prior payment data from the servicing system.
  2. Information the customer might provide about what happened, expressed as a complaint or inquiry.
  3. The specific circumstances around the one payment in question (i.e., "the facts").
  4. The rules (constraints) about what has to happen to a payment when it's received.
  5. The last few things that happened and the next few things that are expected to happen.
  6. The specific span of control the agent has (what it is actually allowed to do).

All of this has to be engineered and cared for. The systems have to be known and able to be integrated with. The data has to be high quality and consistent, and conditions for bad data understood and designed into the workflow. From a testing perspective, we have to know the conditions we should expect, and also plan for the conditions we failed to anticipate.
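As a rough sketch of what "engineering" those six elements might look like, here is a hypothetical assembly function. Every field name is invented for illustration - a real implementation would pull these from actual servicing systems:

```python
def build_context(loan, complaint, facts, rules, recent_events, allowed_actions):
    """Assemble the six context elements above into one payload the model
    sees on every turn. Field names are illustrative, not a real schema."""
    return {
        "loan": loan,                        # 1. system-of-record loan + payment data
        "customer_statement": complaint,     # 2. the complaint or inquiry as given
        "facts": facts,                      # 3. the specific payment in question
        "constraints": rules,                # 4. payment-application rules
        "recent_events": recent_events,      # 5. what just happened / what's expected
        "allowed_actions": allowed_actions,  # 6. the agent's span of control
    }
```

The discipline is less in the dictionary and more in what stands behind each key: integrations, data quality checks, and handling for the conditions you failed to anticipate.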

Let's talk about data. The data informs the prompts, and the prompts inform the data. Rich data requires rich prompts; when the prompts don't "match" the data, we can get really bad results. When I set up a test harness for my agent, I have to have a good set of realistic data because, I can assure you, the agent will find all the little warts and holes, and the agent's plan will have that bad context baked right in. I find that I engineer the data over time as my understanding evolves. Then I revise the prompts. Then I learn new things about the data. Then I revise the prompts... You get the idea.

Agents do stupid shit sometimes. Context engineering improves the agent's judgement so it does less stupid shit less frequently.

Planning

On its surface, this is relatively straightforward - use a large language model (LLM) to make a plan to solve the problem. There is another aspect to planning, however, which is the actual orchestration of the workflow, knowing when to call tools, knowing when to call deterministic functions, and knowing the scope. In the context of our example, planning might include:

  1. What information is required before we can make the specific interaction plan to correct a misapplied payment and handle the associated late fee.
  2. An overall flow for handling this specific type of problem - classify the problem as payment related, verify the identity of the specific borrower, gather additional payment history data.
  3. The specific plan for the actual interaction - determine correct application of funds, apply funds, determine what to do with the late fee, make a recommendation to the human in the loop.
  4. Take a specific set of action steps to resolve the problem.

So there are multiple aspects to planning - workflow orchestration (which may be a combination of deterministic and probabilistic steps), the actual recommended plan, the validated plan, and the resolution plan.
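Those layers could be sketched as a hypothetical orchestration skeleton, where each step might be deterministic code or a model call. The step names and the gating logic are assumptions made up for this example:

```python
def plan_payment_correction(context):
    """Two layers in one sketch: a fixed orchestration flow, and a gate that
    gathers required information before any interaction plan exists."""
    # Gate: required information before we can plan the interaction (aspect 1)
    missing = [k for k in ("loan", "facts", "constraints") if k not in context]
    if missing:
        return [("gather", m) for m in missing]   # go get the data first

    # Overall flow for this problem class (aspect 2)
    steps = [("classify", "payment_related"),
             ("verify_identity", context["loan"]["borrower"]),
             ("gather", "payment_history")]

    # The specific interaction plan, ending at a human checkpoint (aspect 3)
    steps += [("determine_application", None),
              ("apply_funds", None),
              ("late_fee_decision", None),
              ("recommend_to_hitl", None)]
    return steps
```

The validated plan and the resolution plan would be later revisions of this list, produced after the human in the loop weighs in.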

Memory

It's funny to come from a field where we have sought statelessness for so long. Not to get too technical - I am already way over my skis talking about things I don't understand well - but service-based architectures (the wave before now) were all about resiliency and moving away from monoliths: the idea that one part of the system can break while the rest stays intact.

A stateless service tends to scale well, fail predictably, be easier to understand, and minimize coordination complexity. This is helpful for us mere mortals. So where did the state actually go? It went into databases, event logs, workflow engines, and... humans. We had state, it was just distributed. Now we have this whole new way of making systems (the agentic way), and the key distinction is understanding and evolving state. And this is one of the critical parts of memory.

Back to our example of the agent that corrects misapplied payments and waives late fees. What does memory mean? It means:

  1. We know what came before, the payments that were made, the fact that there was or was not a late fee applied.
  2. We know what happened in the initial interaction, why the customer contacted us, the channel, how they "usually" interact, whether they were frustrated.
  3. We understand the original plan and the adapted plan based on the feedback from the human in the loop. Perhaps the original plan waived the late fee and the HITL overrode that and ultimately the agent did not waive the fee.
  4. The next time this consumer contacts us, or there is another "related" situation, the agent knows about the misapplied payment that happened in this event context.

So memory can be short term or long term, inside the interaction or apart from it, and can apply to the future or not. For those of you using Claude Code, this is a critical part of why Claude "compacts conversations": a set of context that is too large or inefficient to pass each time and keep "in memory" gets compacted for continued use. That compacted context is not typically available in the next session unless it is stored elsewhere, which is a conversation for another article.
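A toy illustration of the short-term/long-term split, with a crude stand-in for compaction. This is not how Claude Code actually implements it - the class and its summarization are invented for this sketch:

```python
class AgentMemory:
    """Toy split between in-session turns (short term) and durable facts
    (long term), with a naive 'compaction' when the session grows too big."""
    def __init__(self, max_turns=4):
        self.turns = []          # short-term: the live conversation
        self.durable = {}        # long-term: survives across sessions
        self.max_turns = max_turns

    def record(self, turn):
        self.turns.append(turn)
        if len(self.turns) > self.max_turns:
            self.compact()

    def compact(self):
        """Fold old turns into one summary entry and drop the originals -
        a crude analogue of conversation compaction."""
        summary = "summary of %d earlier turns" % len(self.turns)
        self.turns = [summary]

    def remember(self, key, value):
        """Promote a fact (e.g. 'HITL overrode the fee waiver') to long term."""
        self.durable[key] = value
```

Note what is lost on compaction: anything not promoted to durable storage is gone, which is exactly why the HITL override in the example above needs to be explicitly remembered.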

Tools

Looking at Claude Code, there are hundreds if not thousands of tools. Claude gives you clues to this when it is thinking, as it notes the number of tools it used in a particular step. I've started to pay attention to this. Tools are just what they sound like - a tool is any capability the agent can call to do something in the world beyond just generating text.

In our example context, tools might be:

  1. Get payment history.
  2. Generate summary for human in the loop.
  3. Waive late fee.
  4. Store document.
  5. Create customer notification.

There can be lots and lots of tools, but keep in mind that the more tools there are, the more ways things can go sideways with your agent. Yes, Claude Code has hundreds or thousands of tools. Anthropic's overall post-money valuation was reported at $183B in September 2025 - it's safe to say they have way more money and talent to develop Claude Code than we do for whatever agents we are creating. Stay humble, guys - keep it small and iterate.
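A minimal tool registry for the example tools above might look like this. The functions are stubs with made-up return shapes - the point is that the agent can only call what's registered, and keeping that list small is the discipline:

```python
def make_tool_registry():
    """Registry of callable capabilities for the payment-correction agent.
    Stubbed implementations; a real version would hit servicing systems."""
    def get_payment_history(loan_id):
        # Stub: would query the servicing system of record
        return [{"date": "2025-12-01", "amount": 1500.00}]

    def waive_late_fee(loan_id, reason):
        # Stub: would write to the servicing system and log the decision
        return {"loan_id": loan_id, "fee_waived": True, "reason": reason}

    return {
        "get_payment_history": get_payment_history,
        "waive_late_fee": waive_late_fee,
        # generate_summary, store_document, create_notification would follow
    }
```

Each entry added here is another surface the agent can misuse, which is the argument for starting with two tools instead of twenty.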

Adaptivity

There are two types of adaptation we are talking about here - adapting to environmental changes (new data being a big one) and adapting to the human in the loop (HITL). I might also throw in adapting to predefined stopping points - sometimes called "stop hooks" - although that's not really adaptation, that's just executing a plan when a set of plan conditions is met. In any event, adapting to the environment and to new or changed context is a critical part of agency. Something changes, and I need to change - I had a plan, now I need to make a new plan. On my own. Without being asked. Or, I need to stop and check in to make a new plan (this is a stop hook).

And then there is human-in-the-loop (HITL) adaptivity. Making a plan, having tools, and executing a plan is really meaningless if it's the wrong plan. In fact, it's worse than meaningless - it can be actively destructive and create harm for humans. Yes, we want good context engineering to prevent the wrong plan from being executed, and also, especially in mortgage, we will continue to have humans in the loop for many use cases for many years.

In our example, we might adapt to:

  1. New information - another payment comes in, a check bounces, we receive a bankruptcy notification.
  2. Changed information - turns out the loan type is actually different than what we thought, changing the rules for what we have to do.
  3. Human policy latitude - maybe the system says we shouldn't waive the late fee but in fact the human authority makes a different decision. We will, therefore, need to take a different set of actions (make and execute a new plan).

And it goes on and on. I find the best way to really see all this in action is to, well, make it happen. I encourage each of you to create your own agent, or reach out to me and I'd be happy to help you get started on your journey. And naturally, feel free to work with various partners in the industry (including us and the Phoenix Burst team) who already have agents available to do useful things in mortgage.

If you are looking for education, the Mortgage Bankers Association training calendar for AI is already up on their website, and we will be focusing a lot more on agents this year. Hope to see you in a class sometime soon. Happy building!


January 1, 2026

Why 2026 could be the year mortgage AI delivers

By Spencer Lee | Featured on National Mortgage News, January 2026

While the first few years of the mortgage industry's relationship with artificial intelligence have been characterized by new gadgets and fear of missing out, 2026's themes might center on how companies manage to catch up with its game-changing potential.

If or when they do, the industry should start to see standardization and scalability that help it achieve long-talked about, but difficult-to-achieve, ambitions, including greater customer satisfaction, simplified underwriting and faster closings.

The businesses that can standardize certain processes with the assistance of AI will set themselves off from the pack, according to technology executives. The first hurdle that needs to be cleared for breakthroughs to happen, though, is to achieve widespread use within an organization.

"What we're seeing is that it's the people that are setting the pace. It's not the tech that's setting the pace, because there's this huge gap between what we can actually do with generative AI and how the people are using it, and this gap is only going to grow as models get smarter," said Tela Mathias, managing partner and chief technology officer at mortgage consultancy firm Phoenixteam.

"Finding the way that human beings who are ultimately sitting in the chair —  engaging with the homeowners, solving problems, uncovering and understanding — and optimizing that relationship with their everyday AI is going to be critical in 2026 because what we're seeing is that we can eat everything on the buffet," she added.

The cost of getting started with AI isn't as burdensome as it was with some past technologies, according to Prasad Kodibagkar, chief technology officer at mortgage industry law firm Polunsky Beitel Green and a former executive at Mr. Cooper and Wells Fargo Home Lending. While costs shouldn't be an impediment, a shortage of qualified users is also holding companies back, he concurred.


"It's not a skill set that's easily available," Kodibagkar said. "The price point, the entry point is very low, but that doesn't mean that you can just plug it in and start using it."

Although some of the barriers still present challenges, they are not keeping ambitious companies from capitalizing on the advances AI has already brought with it and setting lofty goals for themselves and their peers.

The road to faster mortgage underwriting

Generative and agentic artificial intelligence are creating noticeable progress toward longtime goals in originations that have been discussed for years, according to Lower President Adam Wiener.

"Everyone is kind of poking around the edges of some really big breakthroughs," he said. "We all share this kind of dream of a digital mortgage that can be originated on demand, at a low cost with a delightful experience for both borrowers and originators."  

Even as progress has been made to shorten origination times over the past decade, the mortgage industry has served as a prime example of the adage: one step forward, two steps back. Over that same time period, the cost of loan production has actually increased.

"I think now we're finally at a moment where technology can reverse that," Wiener said.

What is key to pushing the industry toward greater cost savings is the ability for AI to accurately "read" the full variety of documents needed during underwriting that past tools could not. Often saved in image or PDF forms, the documents related to credit, income and assets are continuing to be digitized, and AI has taken over or eliminated some of the mundane tasks once required to verify hard-to-decipher data.    

"The ability to process semistructured data that's stored in photos or documents or handwritten notes has just jumped up to the next level. As a result of that, you will be able to digitize essentially all of the borrower data in a structured way," Wiener said.

With the data at hand, an AI underwriting tool can help evaluate borrower assets with overlaid guidelines "and almost condition out that file like a pre-underwrite," said Jesse Lopez, vice president of process improvement at Mortgage Solutions Financial.

"For those loan officers that are newer to the industry, I think AI can be a great supplement to structure loans," Lopez noted.

The insights AI can provide are only getting better, Wiener also said.

"You get faster decision-making, clearer eligibility criteria, and just shorter-term times across the board, I think that's one kind of main trend that we're going to see in 2026," he said.

As development moves forward, borrowers and lenders should plan to start seeing expedited closing times, an aspect of the mortgage experience that has been criticized by customers for years.  

For "easy" loans with highly qualified buyers who can afford a 20% down payment, or refinances with strong underlying fundamentals, "you're going to see speeds that are extremely fast," Wiener predicted.

"Call it sub 15 days for those kinds of originations on a regular basis."

Creating a document standard

What's also proved challenging for the mortgage industry in the past is the sheer number of changes, even minor tweaks, that could appear in a single document during the entire life of a loan. The changes may raise questions about which version is to be relied on when consulting it in the future.

"We are supposed to make decisions at every facet of this based on what version of truth?" Kodibagkar asked, while adding that AI has a role to play that will bring about uniformity.

Artificial intelligence is helping create a standard for particular documents in a single loan file, which is the type of Holy Grail goal mortgage development should aim for. "The tech is there," Kodibagkar added.  

The standardization and analysis AI can already produce, though, also stands to realign staff structure among lenders and brokerages, according to Lopez.

"There's enough technology out there, as it stands today, that you can put things in place using AI that can dramatically either reduce your processing staff or allow you scalability without having to hire more processing staff because you can take those mundane tasks off of their plate."

What's in store for servicing

AI's capabilities to analyze data quickly and create customized opportunities for the borrower during the origination process likewise carries over to servicing clients and their customers.

Even before the surge of AI, servicing technology had evolved in recent years to move beyond proprietary systems that, in the beginning, ended up creating a high degree of technical debt. On the other hand, newer software easily integrates with AI and other platforms, according to Cornerstone Servicing President Toby Wells.

"It's not the box that you got five years ago — and that's it," he said. "There's constant enhancements, and that customization flexibility has improved per year and continues to improve."

The new approach to servicing system development makes any automation upgrades or enhancements, including AI, straightforward. "It allows you to customize and build your unique requirements on a client-by-client basis. That is a part of the system configuration, so it's more module-built systems as opposed to that loan accounting, mainframe-task oriented linear system," Wells explained.

With the addition of AI to the process, the amount of customization that can be applied when working with borrowers is ramping up quickly. While tailoring solutions for servicing customers was already possible through existing automations that could implement customized waterfalls and other loss mitigation options, the data granularity available through artificial intelligence is adding new strategies to help borrowers.

AI's benefit to servicers appears first and foremost in call centers, and like in the rest of the mortgage industry, the tools are set to get better in 2026. Fast analysis of conversation transcriptions and summarization creates savings and better outcomes from investors, Wells noted.

With today's AI abilities, the amount of data a call agent has available to them helps provide clarity and possible scripting suggestions almost instantly, regardless of how complicated the borrower scenario.

AI scripting assistance "moves you in the right direction," according to Wells.

"I think some of those ecosystems and tools will start leveraging AI fairly quickly to be able to identify topics to be discussed with a customer. A customer may call you about 'X,' and you want to answer that question, but there might be a series of things coming up that you want to touch base with that customer," he said.

"At the end of the day, what a servicer really is, is a data company. We are moving large amounts of information."

As the mortgage industry moves beyond the foundational stage of AI development, companies will find what they might use it for today as just scratching the surface of its potential.

The potential won't translate into results without an all-in approach from companies, though, Mathias said. Adoption needs to come from the top down to fully understand how game-changing AI is.

"I've never seen a technology before where it is so necessary for every level of the organization to have their fingers on the keyboards," Mathias said.

"What I see in 2026 is a much bigger push towards true workforce enablement and really getting under the covers with what it is that we have to do to enable the workforce to take advantage," she said.  


December 13, 2025

Freddie Mac Bulletin and Executive Order Implications on Mortgage AI

December is not even halfway over and already we have two big developments on the AI housing policy front. I'm sure many of us have seen the buzz on the Freddie Mac AI bulletin and the executive order. Here's my take on what it means for us in mortgage.

Freddie Mac Bulletin Interesting Things

The first kind of interesting thing is the lack of any mention of generative artificial intelligence. The implication here is that the more traditional definition of artificial intelligence applies - my definition of artificial intelligence is that very broad field of computer science focused on enabling machines to perform tasks that typically require human intelligence.

One of the most interesting things about the bulletin is that it does NOT say anything about use cases. It does obligate seller/servicers to furnish documentation on the types of AI/ML used, as well as the "purpose and manner" of such use - but it falls short of saying the industry must provide a use case list. Look here for a mortgage AI use case list. Look here and here for a description of AI types, especially those prevalent in mortgage. This is interesting to me because I think the use case list is kind of where the rubber meets the road in mortgage AI, and I also think it's one place for differentiation in a mortgage company's AI strategy. I appreciate that this was not called for, as I think it's very special to each organization.

Another very interesting thing about the bulletin is the extent to which Freddie Mac's own technology will show up. Freddie Mac, of course, makes extensive use of AI/ML systems in its own stack so naturally this will show up in the documentation. I wonder if this means that seller/servicers will also have to test Freddie Mac technology for adherence to Freddie Mac's requirements.

Also interesting is the following statement "[l]egal and regulatory requirements involving AI are understood, managed, and documented" - I really wish this was easy. We have what we have "always" had to do from a process integrity and data privacy perspective, then we have the patchwork of state requirements (not new to us in mortgage). There is no decoder ring for the rules, and what I generally say is that we have to anticipate where the puck is going to go. I'm happy to see Freddie Mac give us a less gelatinous sense of the puck's destination. Continue reading for my take on the executive order that could change all this.

To the left of the December 2025 line is what I taught in my classes before last week.

I found the use of the term "trustworthy AI" as opposed to "responsible AI" (RAI) interesting. Upon further review, this certainly harkens back to the NIST definition, which is very close to the RAI framework we teach in our classes. I will start using this term and framework instead. In terms of the trustworthy AI framework defined by NIST, please keep in mind that guardrails and evaluations are a central foundation of a well-implemented framework.

Guardrails are technical and non-technical measures implemented to prevent unsafe, unethical, and unreliable use in production environments. Evaluations measure how well generative AI performs against expectations, helping to ensure outputs are accurate, relevant, and aligned with user goals.

[Image: the center of the cheeseburger is the NIST definition of trustworthy AI; the buns and sidebars are the author's own addition.]

A trustworthy AI framework is unevidenceable (is that a word?) without guardrails and evaluations.

The last thing in the bulletin I found interesting was the segregation of duties language. I'll be honest, I hadn't really thought about this before, but in context it makes sense. The language is trying to prevent a very specific failure mode: it's designed to help ensure that the people who benefit from using an AI system are not also the people who define the risk, "measure" the risk, and sign off that the risk is acceptable.
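If I were encoding that rule in a review workflow, the check might look something like this toy sketch. The role names are my invention, not Freddie Mac's language - the point is simply that no one person holds more than one of the three roles.

```python
from dataclasses import dataclass

@dataclass
class AISystemReview:
    """Illustrative record of who played each role for one AI system."""
    business_owner: str   # benefits from using the system
    risk_assessor: str    # defines and measures the risk
    approver: str         # signs off that the risk is acceptable

def segregation_of_duties_ok(review: AISystemReview) -> bool:
    """True only when three distinct people hold the three roles."""
    roles = {review.business_owner, review.risk_assessor, review.approver}
    return len(roles) == 3

if __name__ == "__main__":
    good = AISystemReview("ops lead", "risk team", "chief risk officer")
    bad = AISystemReview("ops lead", "ops lead", "chief risk officer")
    print(segregation_of_duties_ok(good))  # distinct roles
    print(segregation_of_duties_ok(bad))   # owner also assesses the risk
```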

Specific Steps Mortgage Companies Should Take to Comply with the Freddie Mac Bulletin

  1. Incorporate your AI/ML uses and applications into your policies and procedures. Create a matrix articulating them so you can provide it to Freddie Mac upon request. I would also strongly suggest creating or updating your use case matrix - it's not specifically asked for, but it's good practice. Take a risk-based lens (add that as a column in your matrix).
  2. Review your current governance process for AI/ML systems and ensure you have the right operational structures in place for accountability and the prevention of conflicts of interest.
  3. Get your guardrails and evals in place. I cannot emphasize this enough - this is the key to trustworthy AI. We do not trust the foundation model providers and AI vendors to just "take care of it" for us; we must verify. The accountability expected (and expressed in the form of indemnification) is not really new - as always, as a seller/servicer, you are responsible. As the former CFPB director said, "there is no special exemption for artificial intelligence".
  4. Get your AI controls defined and risk-mapped. This is central to the validatable control and security framework that you should already have.
  5. Monitor your AI systems (and store evidence of that monitoring). This will be tricky for those of us using foundation models (so basically all of us). I'm unsure what Freddie Mac expects in terms of testing and monitoring for things like Microsoft Copilot and enterprise deployments of foundation models. Is the expectation that we will do our own testing of these platforms for data poisoning and adversarial inputs? That's a pretty heavy load for anyone, and it requires a hefty dose of technical skill in addition to access that won't be given.
  6. And finally, I'd suggest reaching out to Freddie Mac to discuss the implications of this policy and how you intend to implement it. It's a great group out there, and I'm sure they would be happy to engage.
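For step 1, the matrix itself can be as simple as a structured table you can export on request. Here is one hypothetical shape in Python - the column names and risk tiers are my suggestion for capturing the "purpose and manner" language plus the risk-based lens, not a Freddie Mac format.

```python
import csv
import io
from dataclasses import dataclass, asdict

@dataclass
class UseCaseRow:
    """One row of an illustrative AI/ML use case matrix."""
    use_case: str          # what the system does
    ai_type: str           # type of AI/ML used
    business_purpose: str  # the "purpose and manner" of the use
    risk_tier: str         # the risk-based lens, e.g. low / medium / high

FIELDS = ["use_case", "ai_type", "business_purpose", "risk_tier"]

def to_csv(rows: list[UseCaseRow]) -> str:
    """Render the matrix as CSV, ready to furnish upon request."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for row in rows:
        writer.writerow(asdict(row))
    return buf.getvalue()

if __name__ == "__main__":
    matrix = [
        UseCaseRow("document classification", "supervised ML",
                   "route incoming borrower docs", "low"),
        UseCaseRow("income calculation assist", "generative AI",
                   "draft income worksheets for underwriter review", "high"),
    ]
    print(to_csv(matrix))
```

The exact columns matter less than having one authoritative, versioned artifact; a spreadsheet works just as well if your governance process keeps it current.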

December 11 Executive Order Interesting Things

Definitely the most interesting thing about the executive order (to me, anyway) was the statement that the administration intends to "initiate a proceeding to determine whether to adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws". Personally, I would welcome a set of requirements for appropriate use of AI systems. Right now, it's really hard to know what to do. Anticipating where the puck is going to go puts a major damper on innovation. Many companies are paralyzed by the lack of clarity, which stymies creativity. I don't think it would be so bad to have a decoder ring. I acknowledge that I may regret this statement in the future.

Also interesting to me is the extent to which these two sets of guidance were released in coordination with each other. I have to think they were reviewed in concert. One does not conflict with the other, but if the Freddie Mac bulletin had been produced by a state government, I wonder how it would be reviewed under the executive order.

Specific Steps Mortgage Companies Should Take to Comply with the Executive Order

Nothing really to do here except watch and wait. Of course, it's a great idea to keep the lines of communication open with your state examiners and regulators - see what they are doing and thinking. Until something changes, there are about 150 distinct AI-related laws, ordinances, and legislative proposals out there that we should understand and determine for ourselves whether they apply to us.

By Tela G. Mathias, Chief Nerd and Mad Scientist at PhoenixTeam, CEO at Phoenix Burst

Artificial Intelligence

Leading When Digital is Cheap and AI Slop is Everywhere

January 20, 2026

Artificial Intelligence

How to Get Agents with Agency

January 6, 2026

Artificial Intelligence

Why 2026 could be the year mortgage AI delivers

January 1, 2026

Artificial Intelligence

Freddie Mac Bulletin and Executive Order Implications on Mortgage AI

December 13, 2025

Artificial Intelligence

Eleven Reasons Why the Mortgage Industry Isn't Further Along with GenAI Adoption

December 8, 2025

Artificial Intelligence

Looking for ROI in All the Wrong Places

November 17, 2025

Artificial Intelligence

What does the infamous "MIT study" really mean to us in mortgage?

October 28, 2025

Our Company

Blue Phoenix Awarded $215 Million VA Loan Guaranty DevSecOps Contract | A PhoenixTeam and Blue Bay Mentor-Protégé Joint Venture

October 3, 2025

Artificial Intelligence
Our Thoughts

Ten Not Very Easy Steps to Achieve AI Workforce Transformation

September 29, 2025

Our Thoughts

MISMO Fall Summit Recap: Our Take on the Summit, AI, and the Road Ahead

September 25, 2025


Towards Determinism in Generative AI-Based Mortgage Use Cases

September 22, 2025

Our Company

PhoenixTeam Achieves SOC 2 Compliance, Strengthening Security and Trust in Its Phoenix Burst GenAI Platform

September 9, 2025

Artificial Intelligence

My Journey with Claude Code and Running Llama 70b on My Mac Pro

September 4, 2025

Our Company

Tela Mathias recognized as a 2025 HousingWire Vanguard

September 2, 2025

Our Thoughts
Artificial Intelligence

Top Ten Insights on GenAI in Mortgage

August 25, 2025

Our Company

PhoenixTeam Awarded $49M Contract to Modernize USDA’s Guaranteed Underwriting System, Expanding Rural Homeownership

August 18, 2025


Case Study: The Messy and Arduous Reality of Workforce Upskilling for the AI Future

July 28, 2025

Artificial Intelligence

What is uniquely human? AI impacts on the workforce.

July 7, 2025

Artificial Intelligence

251-Page Compliance Change in Hours, Not Months

June 20, 2025

Our Thoughts
Artificial Intelligence

The Medley of Misfits – Reflections from Day 2 at the AI Engineer World’s Fair

June 5, 2025

Our Thoughts
Artificial Intelligence

AI Engineer World’s Fair: What We’re Seeing

June 4, 2025

Artificial Intelligence

Departing from Determinism and into the Stochastic Mindset

May 30, 2025

Artificial Intelligence

The Agents Are Here and They Are Coming for our Kids

May 6, 2025

Our Company

Built for What’s Next: Welcome to the New PhoenixTeam Website

April 29, 2025

Artificial Intelligence
Our Company

Phoenix Burst Honored with MortgagePoint Tech Excellence Award for GenAI Compliance Innovation

April 7, 2025

Artificial Intelligence
Our Thoughts

From Trolling to Subscribing – An Alternative to Compliance Insanity

March 10, 2025

Artificial Intelligence
Our Thoughts

Supercharge LLM Performance with Prompt Chaining

February 24, 2025

Artificial Intelligence
Our Thoughts

From Program Management to Program Efficiency and Innovation

February 20, 2025

Artificial Intelligence
Our Thoughts

The Evolution of Service Level Agreements: Why AI Evaluations Matter in Mortgage

February 12, 2025

Artificial Intelligence
Our Thoughts

An Impassioned Plea for AI-Ready Mortgage Policy Data

February 3, 2025

Artificial Intelligence

The Role of Mortgage Regulators in Generative AI

January 28, 2025


PhoenixTeam Announces Partnership with Mortgage Bankers Association to Offer GenAI Education for Mortgage Professionals

January 13, 2025

Artificial Intelligence
Our Thoughts

Calculating AI ROI in Mortgage: Strategies for Success

December 16, 2024

New Contract
Our Company

PhoenixTeam Awarded $5 Million Contract for HUD Section 3 Reporting System Modernization

November 25, 2024

Artificial Intelligence
Our Thoughts

The History of Artificial Intelligence in Mortgage

November 21, 2024

Artificial Intelligence
Our Thoughts

Adoption of GenAI is Outpacing the Internet and the Personal Computer

November 8, 2024

Artificial Intelligence
Our Company
Phoenix Burst

PhoenixTeam Launches Phoenix Burst — A Generative AI Platform to Accelerate the Product and Change Management Lifecycle

October 22, 2024

Artificial Intelligence
Our Thoughts

Application of Large Language Model (LLM) Guardrails in Mortgage

October 17, 2024

Artificial Intelligence

Where is GenAI Going in Mortgage?

September 9, 2024

Recognitions
Our Company

Tela Mathias Wins HousingWire's Vanguard Award

September 4, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Episode 9 | The Lindsay Bennett Test: A Live Assessment of Phoenix Burst with a Product Leader

July 19, 2024

Artificial Intelligence
Our Thoughts

A Practical Approach to AI in Mortgage

July 18, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Episode 8 | Inside Phoenix Burst: Transforming Software Development with AI

July 11, 2024

Accessible AI Talks
Our Thoughts
Phoenix Burst

Accessible AI Talks | Episode 7 | Leading a Gen AI Team to Production

July 10, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

AI Reflections After Getting Lots of Feedback

June 21, 2024

Our Company

PhoenixTeam Ranks Among Highest-Scoring Businesses on Inc.’s Annual List of Best Workplaces for 2024

June 18, 2024

Our Thoughts

MISMO Spring Summit 2024: Key Insights and Takeaways

June 18, 2024

Our Community

Join Us in Making a Difference: Hope Starts with a Home Charity Drive

June 3, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts

Freeing the American People from the Bondage of Joyless Mortgage Technology

June 3, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 6 | The Role of AI in Solution Design

May 25, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 5 | The Problem of Product Design

May 22, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 4 | The Problem of Requirements

May 8, 2024

Artificial Intelligence
Phoenix Burst

What is a value engineer?

May 7, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 3 | Problem of Shared Understanding

May 7, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 2 | The Imagine Space and More

May 6, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 1 | Introduction with Guest: Brian Woodring

May 6, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

Storyboards Matter: Three Insights for Using AI to Accelerate Product Design and Delivery

May 3, 2024

Our Company

PhoenixTeam Designated as One of MISMO's First Certified Consultants, Shaping the Future of Mortgage Industry Standards

April 24, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

The Two Rules of Gen AI

April 18, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

Three Simple Steps to Kickstart your AI Journey Today

April 16, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

The Peanut Butter and Jelly Sandwich AI Experiment

April 16, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

Starting Our AI Journey

April 10, 2024

Our Company

PhoenixTeam CEO and COO Make Inc.’s 2024 Female Founders List

April 10, 2024

Our Company

PhoenixTeam has earned its spot on Inc. 5000 Regionals: Mid-Atlantic

March 4, 2024

Our Company
Our Thoughts

Key Insights and Takeaways from MISMO Winter Summit 2024

January 29, 2024

Our Company
Our Thoughts

Leader of the Year Interview with Jacki Frazer

January 18, 2024

Our Company
Our Thoughts

Why Phoenix - Shawn Burke

December 28, 2023

Agile
Our Thoughts

PhoenixTeam at Agile + DevOps East 2023: Key Insights and Takeaways

November 17, 2023

Spotlights
Our Company
Our Thoughts

Why Phoenix — Vicki Withrow

November 3, 2023

Our Thoughts

Breaking Barriers: The Extraordinary Woman Who Redefined Workplace Equality

October 20, 2023

Our Company

PhoenixTeam: In the Arena

October 12, 2023


Agile
Our Company

Defining the Word “Done” The Phoenix Way

September 14, 2023

Product
Our Company

PhoenixTeam Approved as DOD SkillBridge Partner to Help Active-Duty Military Service Members Re-Enter Civilian Workforce

August 18, 2023

Recognitions
Our Company

PhoenixTeam featured on Inc. 5000 List of America’s Fastest-Growing Private Companies for the 4th Consecutive Year!

August 15, 2023

Agile
Our Company

Military Veteran’s Transition from Active Duty to Civilian Life as Lean-Agile Methodologist and Coach

August 1, 2023

Product
Recognitions
Our Company
Our Work

Veteran Founded Technology Venture Blue Phoenix Expands Reach with GSA IT-70 Award

July 24, 2023

New Contract
Our Company
Our Work

PhoenixTeam Begins New Partnership with HUD for FHA Catalyst

June 9, 2023

Our Company

PhoenixTeam is Excited to Announce Becky Griswold as its Newest Partner

June 1, 2023

Product
Our Work

PhoenixTeam proves value realization begins with product discovery

March 30, 2023

New Contract
Our Company
Our Work

PhoenixTeam Strengthens Partnership with U.S. Department of Agriculture

March 29, 2023

Recognitions
Our Company

PhoenixTeam Featured on 2023 Inc. Regionals Mid-Atlantic for Third Consecutive Year

February 28, 2023

Recognitions
Our Company

PhoenixTeam Announces 2022 Annual Company Award Winners!

January 30, 2023

Recognitions
Our Community

PhoenixTeam Ranks #15 on Washington Business Journal’s 2022 Fastest Growing Companies

October 21, 2022

Recognitions
Our Community

Fortune and Great Place to Work® Rank PhoenixTeam #29 2022 Best Workplaces in Technology™

September 7, 2022

Recognitions
Our Company

PHOENIXTEAM FEATURED ON INC. 5000 LIST OF AMERICA’S FASTEST-GROWING PRIVATE COMPANIES

August 16, 2022

Recognitions
Our Company

Fortune and Great Place to Work® Rank PhoenixTeam #53 2022 Best Medium Workplaces™

August 8, 2022

Recognitions
Our Company

Fortune and Great Place to Work® Rank PhoenixTeam #29 2022 Best Workplaces for Millennials™

July 18, 2022

Recognitions
Our Company

PhoenixTeam Ranks Among Highest-scoring Businesses on Inc. Magazine’s Annual List of Best Workplaces for 2nd Consecutive Year

May 10, 2022

Recognitions
Our Company

PhoenixTeam Featured on 2022 Inc. Regionals Mid-Atlantic for Second Consecutive Year

March 15, 2022

Our Community

PhoenixTeam Goes Pink for Breast Cancer Awareness Month

October 15, 2021

Our Company

PhoenixTeam shows up strong at the MISMO Fall 2021 Summit

October 5, 2021

Recognitions
Our Community

PhoenixTeam makes the 2021 Inc. 5000 list for 2nd consecutive year!

August 17, 2021

Salesforce
Our Work

PhoenixTeam is Now a Salesforce Partner!

June 30, 2021

Our Company

Introducing the newly designed PhoenixTeam Website

June 29, 2021

Recognitions
Our Company

PhoenixTeam Ranks Among Highest-Scoring Businesses on Inc. Magazine's Annual List of Best Workplaces for 2021!

May 12, 2021

Our Company
Our Thoughts

The Importance of Continuous Learning for Team Members

April 20, 2021

Salesforce
Our Work

PhoenixTeam’s Three Pillars to Successfully Implementing Salesforce

April 6, 2021




© 2025 PhoenixTeam. All rights reserved.   |   Privacy Policy   |   Terms of Use