By Tela Gallagher Mathias
We held our first ever public-private sector AI exchange yesterday, and we opened with a question – what is it that makes humans uniquely human? Answers ranged from “procreation” to “cooking” to “empathy”. This question is relevant because, as a refresher, AI is a field of computer science focused on enabling computers to perform tasks that typically require human intelligence. Human intelligence is our ability to sense, understand, and create. So the question of what humanness is matters more acutely now than ever, because these innately human qualities are what will be most valuable in the workforce of the future.
In doing research on the historical impacts of disruptive technology on the workforce, I was very encouraged, and I posted about that earlier this week. The bottom line is that with every major disruption since the 1750s, including both industrial revolutions, significant numbers and types of new jobs were created. For example, the first industrial revolution brought water-powered spinning machines, igniting the shift to factory textile production. This created new labor classes concentrated in mills and sparked the modern wage labor system.
In a recent MIT study that evaluated 80 years of Census data, researchers found that fully 60% of the jobs we have today did not exist in 1940. Not only that, but many of these jobs wouldn’t even have made sense at that time (I’m looking at you, “content creator”). For context, tattooer became a job recognized by the US Census in 1950, software engineer in 1970, conference planner in 1990, and solar photovoltaic electrician in 2018.
This is really encouraging to me, but it does rest somewhat on hope as a strategy. Every single time technology has disrupted our lives, jobs have been created, and we as a people have survived; therefore, it is likely we will survive this one. I also know that overall, if gross domestic product (GDP) is a measure of prosperity, we are a wildly more prosperous country than we were: from $3B in 1790 to $23T today.
Adjusted for inflation, U.S. output is roughly 6,400 times its 1790 level and has kept a long-run trend of about three percent real growth per year despite wars, panics, and recessions. Per-capita prosperity has multiplied as well: real GDP per person rose from about $2,000 (in today’s dollars) in 1820 to more than $70,000 today, roughly a 35-fold gain, illustrating how sustained innovation converts into living-standard gains when paired with education and market dynamism.
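As a quick sanity check on that compound-growth arithmetic, here is a minimal sketch in Python (the 6,400x multiple comes from the figures above; the 2024 end year is my assumption):

```python
# Back-of-the-envelope check of the long-run growth figures above.
# Assumed inputs: a ~6,400x real increase in output from 1790 to 2024
# (the end year is an assumption for illustration).
start_year, end_year = 1790, 2024
growth_multiple = 6_400

years = end_year - start_year
implied_cagr = growth_multiple ** (1 / years) - 1
print(f"Implied average real growth: {implied_cagr:.2%} per year")
# Prints about 3.8% per year, in the same ballpark as the ~3% long-run trend.
```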
What I don’t know, however, is what negative consequences this introduced for the people who were in the jobs that became obsolete. And I don’t know what happened to the young people who were attempting to enter the job market just after those disruptions.
We will see significant impact across virtually all major economic sectors. No matter the sugar-coating from the major technology companies and the Silicon Valley attitude, jobs will be eliminated. We will continue to see job loss due to robotics. We will continue to see job loss due to generative AI in creative fields. We will continue to see job loss due to the continually accelerating pace of automation.
Education will be very slow to adapt curricula and learning approaches, with public education trailing far behind independent schools. The elementary school kids matriculating now are especially at risk of falling behind. They had their learning radically disrupted first by COVID, and they are among the last kids born before ChatGPT. This means preparing this generation will require an even more significant investment by parents; with so much of the responsibility having to happen in the home, it will largely fall to them.
I think Jensen Huang puts it best. At the Milken Institute Global Conference, Huang explained that the disruption from AI is “not simply about outright job loss through automation but about a growing divide between those who harness AI as a tool and those who do not,” highlighting the risk of inequality between the so-called “AI-skilled elite” and everyone else. Out of a global population of about eight billion, only about 30 million people are proficient in programming and advanced AI technologies – less than 0.4% of the global population. This small tech-fluent cohort wields disproportionate power with AI, while many others could be left behind.
So, I think the message for the current workforce is to adapt or fall behind. Many of the big tech companies at least claim they are investing significantly in reskilling and retraining their current workforces as reliance shifts from human labor to machines.
Gartner had an eclectic, if at times silly and self-promoting, perspective on future work trends in 2025.
There is no doubt about this one. Add to it the mass exit of federal employees, and we are exacerbating this phenomenon.
We have definitely done this at our company, and I have advised companies on what an AI-first, or at least AI-ready, organization looks like. The job names are changing and new jobs are being created; at my company we have “value engineers” and “evaluation specialists”. And, of course, my title has changed – I call myself the chief nerd and mad scientist, as that seems to best fit what I do.
This one strikes me as silly and self-promoting on Gartner’s part. “Nudge theory” suggests that subtle, indirect suggestions or environmental changes can influence people’s choices and behaviors without restricting their freedom of choice – for example, placing healthy foods in the pantry at kids’ eye level. From an employer perspective, this is the idea that technology will tell us when a text or an email is better, and when we should call. Certainly, Tanya Brennan is the master of this. I find it offensive that we would have technology tell us when and how to communicate, and I’m not into it.
I had to look this one up. This trend reflects a growing workplace dynamic where workers increasingly trust AI systems more than human managers, especially in areas related to fairness, transparency, and objectivity. I think the alliteration is a bit cheesy, but I can see where this is coming from. My brother, Gil Gallagher, who is the middle school director at The Field School, told me about a trend toward (as well as resistance to) algorithm-based grading. I wonder if this could be fairer than humans’ subjective evaluation. We see this in my industry as well, although there is a lot of skepticism toward probabilistic, algorithm-based decisions.
This one highlights the growing need for companies to establish clear, principled boundaries between acceptable AI use and deceptive or unethical behavior. Obviously, this will play out in education: what is considered cheating now will just be the way things are done in a few years, maybe sooner. I saw this myself in thinking about corporate testing. I recently had to take a test to validate my cyber awareness – is it cheating to use ChatGPT to confirm my answers to some of the questions? Is it cheating if one employee uses genAI to help them at work, but another doesn’t?
This one is interesting. If we look at traditional diversity, equity, and inclusion (DEI) efforts and their focus on the numbers, we saw many organizations dissatisfied with the results, and, of course, the major anti-DEI backlash that is now playing out. I myself have wondered about the effectiveness of our DEI program, and of efforts I have provided significant financial support for in the recovery community. What if instead we focused more on how we made people feel, and less on how many of them there were? Harder to measure, certainly, but better? I think so.
I have seen this, and, in fact, embraced it. We put a serious premium on experimenting and failing at our company. We have a team of value engineers, and we ask them to try out all the new stuff and struggle for a while, even if this means failing a few times. We do, however, encourage asking for help; yes, the struggle is part of the process, but so is learning from those who have gone before. I am seeing this show up in the industry as using a chainsaw to cut a tissue: everything doesn’t need an agent. In fact, there are many, many mortgage use cases that really shouldn’t use an agent at all. Do you need high precision? Do you need complete transparency? Do you need it to work 100% of the time on 100% of the cases? Yeah, maybe not an agent right now.
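To make that gate concrete, here is a minimal sketch; the function and its criteria are my own illustration of the questions above, not a published framework:

```python
# Illustrative "does this need an agent?" gate based on the questions above.
# The criteria are assumptions for the sake of example.

def agent_appropriate(needs_high_precision: bool,
                      needs_full_transparency: bool,
                      must_work_every_time: bool) -> bool:
    """Return False when deterministic automation is the safer choice."""
    # Any hard requirement for precision, transparency, or 100% reliability
    # points away from an agentic approach, at least for now.
    return not (needs_high_precision or needs_full_transparency
                or must_work_every_time)

# Example: an income-calculation step in mortgage underwriting demands
# all three, so a deterministic pipeline beats an agent.
print(agent_appropriate(True, True, True))  # -> False
```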
This one reframes employee isolation as more than a personal concern: it is a strategic and operational liability that can erode team performance, innovation, and retention if left unaddressed. I recently found a study indicating that about 26% of employees in 2024 reported being happy with collaboration at work, DOWN from 31% in 2021 (I can’t find it now or I would link to it; I promise it exists, and this isn’t a ChatGPT hallucination). That’s fascinating – that’s worse than during COVID. I do occasionally get lonely at work, even though I have a great partnership with Tom Westerlind and Tanya Brennan, and of course my teams. But when I do get lonely and there is no one around and no one to call, it can be very isolating. Those are the times when I think about maybe going to work somewhere else, and I own the company. (Unabashed Jensen Huang superfan, just make me an offer.)
I suppose this one is about an employee uprising over the use of genAI at work. Those companies that have banned it (yes, they are still out there) will see serious dissatisfaction from growing numbers of employees, who are effectively being forced by their organizations to fall behind. In this job market, no one can afford to fall behind, and this failure to get with the genAI times is going to be a real talent drag.
So where does this leave us? I think it leads us back to what makes us uniquely human. That which makes us uniquely human is the differentiator in the AI future, which is really just the AI now. Many of the big tech companies talk about hiring not for technical skill, but for more human talents. Microsoft, for example, has said very recently that “Microsoft still plans to hire more software engineers than it has today, but it cares more about what makes them human and less about their technical abilities.”
And what does that mean? What are those qualities? They are, and I see this validated time and time again: the ability to lead in uncertainty, creativity, judgement in difficult situations, and the ability to connect the dots. Steve Jobs once said in a now-famous commencement speech that “you can only connect the dots looking backwards”, and I have definitely found this to be true.
We make the best decisions we can, with the information we have, relying heavily on intuition and experience. We hope they are the right ones, we hope we made them at the right time, and we pray that we made them with the right people. Then, in the fullness of time, more is revealed, and we see the dots that we connected. I don’t (yet?) see that in ChatGPT. We made a brutally hard business decision in 2024 that affected a lot of people. I made it together with my partners, and it really seemed like the right one. It fundamentally changed our company, initially for the worse and in the long term for the better. I wasn’t sure it was right at the time, and only after a lot of pain, a lot of time, and a lot of new data has it been revealed unambiguously to be the right one. Those are really scary, gut-wrenching decisions, and I don’t think I would leave them to ChatGPT.