It’s the first AI election. But it won’t be the last.
Q&A with congressional candidate Alex Bores, the first target of an AI super PAC. “Let's have the debate. Let me know the time and the place,” he told The Up and Up.
*A note to readers: It’s been a busy fall. From the launch of our campus ambassador program to the rollout of our paid tier, The Up and Up has spent the past few months exploring everything from campus party culture and ChatGPT’s role as a dating guru to how Charlie Kirk changed gen z’s politics, the youth vote in the 2025 elections, and the importance of happiness in gen z’s American Dream. We’ve covered a LOT of ground, and we’re grateful for our community of readers.
Normally, editions like today’s — featuring interviews with next-gen power players — are reserved for paying subscribers. But since it’s Thanksgiving week, we’re sharing this one above the paywall as a small thank-you for being here with us. We hope you enjoy it, and if you find it valuable, we’d love for you to consider upgrading your plan.
We’re taking a pause from publishing this Thursday, but paying members can join us all week in our member-only chat for a gen z culture and politics Q&A. If you’re a member, send your questions our way – we’d love to hear them. If not, you can upgrade here to join in!
The first AI election
The 2026 midterm elections are shaping up to be the first AI election: high-profile contests where the role of AI, and how to regulate it, is a hot topic. The issue may also prove a litmus test in tight races, as lawmakers scramble to catch up to a technology already shaping campaigns, jobs, and daily life.
Case in point: New York Assemblymember Alex Bores, who’s running for Congress in New York’s 12th congressional district, made headlines last week when he became the first candidate targeted by “Leading the Future,” a $100 million AI super PAC backed by Silicon Valley players including a16z and executives from OpenAI and Palantir. The group says Bores’ work on regulating AI to make it safer would slow innovation and threaten U.S. leadership in AI.
Bores, who’s 35, formerly worked at Palantir and has a master’s degree in computer science. He sponsored New York’s RAISE Act (currently awaiting Governor Kathy Hochul’s signature), which stands for “Responsible AI Safety and Education.” The bill would require what it calls “frontier” AI companies to “write, publish, and follow safety and security protocols and risk evaluations.” It would also permit the New York State Attorney General “to bring civil penalties against large AI companies that fail to live up to these standards.”
Bores insists he’s bullish on AI – that AI leaders are gunning for him because he’s one of the rare politicians who actually understands the technology, not because he opposes it altogether. He’s leaned into the PAC’s attacks and made the fight a focal point of his first few weeks in the crowded primary field. In his view, the backlash is less about the bill’s details than about who gets to set the rules: elected officials accountable to the public, or an industry advancing its own self-interest.
How voters feel: AI safety is popular with the American people. A large majority of Americans – 80% – say the government should prioritize “maintaining rules for AI safety and data security, even if it means developing AI capabilities at a slower rate,” according to a recent Gallup poll.
This year, AI has been a focal point of our work at The Up and Up. We spoke to rural students about how it’s changed their education, documented the challenges schools face in creating protocols around it, covered the myriad ways young adults are using it for everything from medical advice to dating help to mental health support, and are regularly checking in with our community to learn how AI is reshaping the entry-level job market. As we continue to explore how AI will redefine adolescent life, we’re committed to understanding how next-gen leaders and legislators are thinking about it, too.
To that end, The Up and Up spoke with Bores to hear firsthand how he’s thinking about the technology, and what Leading the Future’s first fight says about the role of AI in the 2026 midterms.
Our conversation has been edited lightly for clarity and brevity.
Give us your AI background. When did you introduce the RAISE Act, and why?
Alex Bores: I have a background in AI, both academically and professionally. I have a master’s degree in computer science with a specialization in machine learning. I spent eight years working in industry. I have two software patents, and those eight years include a year and a half at a startup that was using early versions of transformers, the technology that’s enabled modern LLMs.
I have used this technology professionally, I have studied it, and I’ve also now legislated on it.
The RAISE Act was one of probably five AI bills I did this year, but the one that has gotten the most attention and maybe the most pushback. [It] is putting basic safety standards on advanced AI research at the absolutely largest companies. So think your Googles, your Metas, your OpenAI, xAI, Anthropic. That is the set of people who might be covered by the RAISE Act. And what it requires is that they have a safety plan and that plan is publicly disclosed and they follow it, that they disclose critical safety incidents – so things that suggest a massive increase in the risk of a harm happening – and that they don’t release models that have an unreasonable risk of causing a critical harm. So basically, [if] their models are failing their own tests, they should not go ahead and release the models. That’s meant to protect against what we saw with the tobacco companies, where they knew that cigarettes caused cancer but denied it publicly and continued to release their products.
You’re coming at AI regulation from a safety point of view, even a national security point of view. But the other side of the AI debate, especially in our listening sessions with young people, is job market disruption for entry level jobs and early career professionals. Is that something you view as totally separate? Is there other legislation you might suggest there?
AB: The additional scrutiny of these models, I think, will help with all the issues we have with AI. But there is a lot that we need to do on the labor market, above and beyond the RAISE Act. We need to be thinking about the ways that [AI] is currently displacing workers – when it’s taking over a profession that has licensure, how do we ensure we are protecting people’s equity there? We need to make sure it’s held to at least as high a standard as humans are held to. We see so often that it is being used to just replace people and put out shoddy work in response. And we need to ensure that we are training people for the jobs that are coming in the future…
If we are not reviving our entire education system, we are not going to be preparing people for what’s actually out there. AI displacement right now is very proximate and already happening.
You see Anthropic saying that they think 50% of white-collar jobs could be replaced by AI in the next five years… This is not like some future potential risk. These are things we’re dealing with right now.
You’ve really leaned into this publicly, rather than shying away from it or trying to make it a non-issue. How come?
AB: Because the people are on our side. The support for reasonable regulation on AI, reasonable protections, making sure that tech works for people instead of people working for the technology is broadly popular, and that’s why these few AI billionaires and Trump mega-donors see the need to drop hundreds of millions of dollars, because they know they are losing both the substance and the style of the arguments. They know it is not politically popular, and they know that, frankly, they’re wrong about what’s happening.
So if they want to spend a bunch of money to raise the salience of the issues around AI in voters’ minds, I say, let’s have the debate. Let me know the time and the place.
Is 2026 the year of the AI election? What do you make of that and how do you see your role in that conversation?
AB: It’s less that it will be an AI election – and, similarly, less that there’s one big, broad bill to do on AI – and more that AI is going to infuse itself into everything.
It’s like saying we’re going to have one bill on the economy. It is just so broad and touches every issue that you care about, and we need leaders that both understand this technology and are willing to do hard fights to stick up for people as it’s moving so rapidly.
But to the second part of your question, on my role… The people that set up the super PAC are not stupid. They are, I think, morally questionable, but they’re not stupid. If they were just trying to defeat me, the most effective way to do that would be to not announce it, to spend the money quietly right towards the primary, and to do it on other issues that might be more popular with the electorate. By announcing it so early, by saying clearly what they stand for, and being so public about it, the goal isn’t just to defeat me. It is to scare any other candidate from these positions and to scare any other state legislator from advocating for these kinds of bills that are broadly popular and desperately needed. And so you know, insofar as I have any role in this, it is to show other candidates and other elected officials that you don’t need to be scared of them, and that by leaning in, that actually will benefit you, and you need to keep advocating for your neighbors.
What’s the biggest misconception about your relationship to AI?
AB: That I am anti-AI.
In fact, I am very bullish on its potential to help humanity. We just need a lot of guardrails for the downside, and we need to have our democracy able to regulate it and make sure it’s used for the best purposes. Tech can and should be a force for good.
I have used it that way in my career, and I’m continuing to use it that way in the legislature. I just published an op-ed on a bill where we used AI to go through old New York state statutes that are outdated, that might give dangerous provisions, dangerous power to the government and to strip those away… This is not a campaign that is saying tech should stop at its current moment, but it is saying that it needs to be regulated and democracy needs to be held above the Silicon Valley billionaires.
Five years from now, what outcome would make you say the US is on the right path with AI and what outcome would scare you?
AB: I’m laughing because I just had like 30 outcomes pop into my head on the “scare me” side. And that’s kind of the problem: there are so many ways it can go wrong.
If it’s going right, it is being used to aid the productivity of the workforce in a way that does not displace workers. It is being used to accelerate the education of kids by giving them individualized tutoring as a supplement to what they are getting from schools, and is teaching them critical thinking instead of replacing it. The problem of deepfakes is solved, which we already have the technical capacity to do through C2PA [Coalition for Content Provenance and Authenticity], and it is in our democracy now, leading to more truth instead of less. And we’ve protected against all the downsides, and so we’re getting benefits like curing diseases, and routinizing the monotony of life, and creating new economic activity.
But that isn’t just going to happen naturally on its own. That requires setting up the right incentives in the economy for the companies [so that] the technology is benefiting everyone.
The Up and Up’s take: AI is the future, and it’s here to stay. But if we treat it with the same hands-off approach we took with social media, we’ll set ourselves up for another generation-wide fallout.
In all of our listening sessions, gen zers tell us how social media has shaped their lives (sometimes for good, but often for bad) by rewiring how they learn, think, communicate, and interact. Much of the harm they regret could have been avoided, and the good could have been amplified, if we’d had stronger policy leadership from the get-go, driven by politicians who understood and actually used the tech platforms that now run our lives.
We have the chance to avoid making that same mistake with AI. That’s why AI literacy should really be the *bare minimum* for all future members of Congress. Our generation is counting on it.
Noteworthy reads
America’s Children Are Unwell. Are Schools Part of the Problem?, Jia Lynn Yang for The New York Times
Conservative Young Women Flip the Script: Kids First, Then Career, Rachel Wolfe and Paul Overberg for The Wall Street Journal
My AI Boyfriend Is Alive, Lila Shapiro for The Cut
Why College Students Prefer News Daddy Over the New York Times, Victoria Le for The Verge
Southern universities reportedly see massive influx of Northeast students seeking sunshine and Greek life, Sophia Compton for Fox News

