Thought Leadership Essay

AI and the Architecture of Attention: Three Voices on the Future of Thinking


All of our professions are affected, and we're all figuring out what to do with the incredible and growing presence of AI in our day-to-day work. This short reflection is a snapshot in time of how we are thinking about AI and the future of thinking.

The Scaffolding of Learning

The group chat for my co-working space is always active, but on Friday the notifications were so frequent I had to mute them. A professor in our group posed a question to our circle of academics, consultants, and journalists: "My PhD student wants to use AI to take notes… She said it was to help her focus more on what I was saying. I wasn't sure how to respond, and of course the university has no policy or guidance on this. Advice, friends?" The chat lit up: an entire room of Gen X women with strong opinions shaped by our analog childhoods and digital teen and adult years.

After 25 years in higher education, the latter part of my career spent running a leadership college, I found the idea of a PhD student outsourcing note-taking to AI unsettling. Yes, the tool promises efficiency, a way to capture lectures without losing a thought. But the very act of listening, synthesizing, and wrestling with ideas in real time, the scaffolding of learning, is being outsourced. If technology can do the thinking for students, what becomes of the attention and discipline that genuine learning demands?

Learning isn't about recording facts. It's about engaging and building your own understanding through your own reasoning and experience. Real comprehension comes from confusion, from questioning, from the moments when your mind has to make connections and challenge what it thought it knew. As Tim O'Brien argues in his work on adult developmental theory, growth happens when learners confront assumptions and expand their capacity for complexity-not when content is simply delivered.

Students have always looked for ways to manage cognitive load-cheat sheets, flashcards, recording lectures, requesting slide decks. Note-taking is work, and it matters. We don't get better at something by outsourcing it; we get better through practice.

I'm not calling for rejection. I'm asking how we integrate these tools so they strengthen our capacity to think rather than quietly replace it.

Drowning in Content

I opened my task list this morning and found 58 new pieces from my SEO team waiting for approval. Fifty-eight. All generated since yesterday.

I'm drowning in content, and it's surprisingly hard to edit or even make sense of it. Most of it was researched or written with AI, and you can feel it. It's the same way my students' essays have started to sound. As I told them in class today: "Don't have AI polish your leadership failure diagnosis. I want to hear your voice." There's a rhythm to AI writing I now recognize: the slightly forced three-word alliteration, the repeated "not just X, but Y" structure, the smooth, almost hypnotizing flow, until you suddenly notice you haven't actually understood anything of what you just read.

The strange part? It's not bad. It's milquetoast, sometimes exactly correct, just uninteresting. That's what makes it hard.

When I try to edit this writing, whether an SEO article about "how to run effective meetings" or a student essay on how to lead through a particular situation, I struggle to know what to fix. Do I agree with it? Kind of. Would one of my clients find it clarifying? Maybe. Nothing is wrong, but nothing is surprising or particularly compelling either.

Leadership writing is already challenging because it so easily slides into self-evident statements: "empower your team," "communicate clearly," whatever. Utterly forgettable.

When every sentence sounds fine, spotting what matters gets harder. I am having to retrain my brain to deliberately notice the absence of meaning, to ask why three words are being used where one would do. I'm in the business of training leaders to succeed in high-stakes, complex situations, and AI is going to be one of their resources. But how do I train them to engage with the underlying ideas being put forward rather than being lulled by the format and the ease?

With my students I am experimenting with different formats: what if they submit their final assignments as a video, or as a statement for a press conference? With the SEO team, I'm asking them to send me short bullet points, not full drafts.

I'm hoping to find the space where the thinking actually happens and engage with it there, rather than retraining my brain to make sense of all this generic material.

Accuracy Matters

State Capitols run on relationships and information. The mountains of paper handed out in my first few terms as a State Legislator dwindled as specialized information apps, blast emails, and downloads exploded. It was so much information to process: reams of budget spreadsheets and public comments on a multi-billion dollar budget, 300+ pages of daily bills and amendments to digest and vote on, and the data, expert testimony, and constituent and advocate feedback that supported or refuted legislation under development. The amount of information I was expected to understand grew with my seniority, as I became Chair of House Appropriations, and eventually Speaker of the House. Doing the job well required reading, processing, and synthesizing a tremendous amount of information.

For their part in our participatory democracy, my constituents weren't shy, reaching out with their thoughts and concerns based on their lived experiences, and the information gathered from news outlets, word of mouth, and social media. My constituents' access to information and the way they consumed it changed drastically over my 18 years in office, starting before the iPhone was released, and ending just before the first members of the public created ChatGPT accounts.

A tool to summarize information? Yes please! A time saver that could write first drafts of copious emails, constituent updates, and speeches, or tighten up a human-written draft? Hallelujah.

However… When one's workday involves crafting the rules that all of society, theoretically, must live by, the embedded details matter. Precision matters. The difference between "may" and "shall" buried in an amendment could change my vote. And a legislative update from me in my local paper that was smoothly written but void of the substance and details that affect people's lives might change the vote of a constituent in my politically divided, very "purple" district. AI output is polished but not consistently precise, which has concerning consequences for law-making, whether writing and understanding legislation, deciding how to vote, communicating with constituents, or forming opinions as a constituent to reflect back to one's Representative.

While fear tactics and misinformation have long been a part of politics, the ability to generate realistic fiction at such scale as to influence voters is terrifying, rendering AI more threat than tool in policy and political venues. There are so many vectors, foreign and domestic, eroding trust in government. The messy process of democracy needs to respond to people's actual experience rather than artificially imagined content.

The most useful and trustworthy application of AI in a legislative context is the one staff in some states are already experimenting with: using it as a drafting tool. Legal staff are often overwhelmed with time-sensitive requests for multiple drafts of complex bills, amendments, side-by-side comparisons of different versions, and research drawn from other states and from decades, even centuries, of precedent. With carefully crafted instructions and defined parameters on which material to search, AI could be beneficial for legal staff who are intentionally trained in how to use it with integrity and who have the expertise to confirm accuracy.

As with many powerful, potentially revolutionary tools, making AI so easily accessible without guardrails around ethics, training, and application has troubling consequences.

What These Three Perspectives Reveal

Three different contexts, but the same underlying tension: AI excels at producing outputs that look like thinking. The question each of us faces is how to use it with intention, and how to develop the new diagnostic skills to engage with what it produces so that it helps rather than undermines.
