At sunrise outside Abilene, the prairie looks the same as it always has—caliche roads, mesquite, and cattle easing toward stock tanks. But look closer and you’ll see new survey stakes and power markers dotting the fencelines. This is where Stargate, the megaproject branded by OpenAI and its partners, is pushing to raise one of the largest AI data-center campuses on Earth—measured not in office cubicles, but in gigawatts.

What “Stargate LLC” actually is
Stargate LLC is a joint venture formed in 2025 by OpenAI, SoftBank, Oracle, and MGX, announced from the White House on January 21, 2025. The pitch: invest $100 billion up front and up to $500 billion by 2029 to secure U.S. AI leadership, with President Trump promising to fast-track energy infrastructure to match the scale. The name nods to the 1994 sci-fi film; supporters have compared the ambition to the Manhattan Project.
Since then, OpenAI and Oracle have unveiled plans to add 4.5 gigawatts of new Stargate capacity in the U.S.—a number so large it’s routinely translated for the public as “roughly the power draw of millions of homes.” (One widely shared estimate equates 4.5 GW to about four million homes.)
Most recently, multiple outlets reported a $300 billion, multi-year cloud deal between OpenAI and Oracle tied to this build-out—one of the largest compute agreements ever discussed in public. It underscores the bet that AI training will devour unprecedented electricity, land, and water for years to come.
Abilene, Texas: where the rubber meets the ranch road
Why Abilene? Land, transmission lines, rail and highway access, and proximity to West Texas energy made Taylor County the first, loudest hub. Financing has flowed through a web of partners: Blue Owl Capital, Crusoe Energy Systems, and Primary Digital Infrastructure are developing the Abilene campus—leased to Oracle—with major debt packages led by JPMorgan. Local reporting and filings point to billions in loans and tax abatements to prime the site.
Business Insider obtained records showing an 85% property-tax abatement contingent on a multi-billion-dollar capital spend, illustrating how counties sweeten the pot to land data-center megaprojects. For rural governments, it’s a calculus: forgo near-term tax revenue to capture construction waves and a higher base value later, if the promises materialize.
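To make the scale concrete, here is a deliberately hypothetical illustration; the assessed value and tax rate below are invented for scale, not figures from the actual Abilene agreement:

```python
# Hypothetical illustration of what an 85% property-tax abatement means.
# The assessed value and combined tax rate are INVENTED for illustration;
# they are not the actual Abilene/Taylor County numbers.

assessed_value = 4_000_000_000   # hypothetical $4B in taxable improvements
combined_tax_rate = 0.02         # hypothetical 2% combined local rate
abatement = 0.85                 # the 85% abatement reported by Business Insider

full_bill = assessed_value * combined_tax_rate
abated_bill = full_bill * (1 - abatement)
print(f"Full bill: ${full_bill:,.0f}; after abatement: ${abated_bill:,.0f}")
# Full bill: $80,000,000; after abatement: $12,000,000 per year
```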

Hype vs. reality check
Even as shovels hit dirt, messaging hasn’t always aligned. On August 7, 2025, Bloomberg reported SoftBank’s CFO publicly conceding Stargate “needs more time,” reflecting the complexity of wrangling capital stacks, hardware supply, and policy alignment at this scale. At the same time, discrete project financing and construction in Abilene have proceeded under separate developer entities and leases—a reminder that the brand “Stargate” spans multiple structures and partners.
Jobs: how many, and for whom?
Politicians tout six-figure job numbers. Locally, the immediate lift is in dirt-work, steel, concrete, substation builds, switchgear, and specialized trades: big payrolls during construction. But then headcounts drop. Reporting around AI data centers suggests long-term operations may support roughly 100 permanent staff per mega-site, a fraction of the construction surge. Rural counties hungry for stable jobs have to weigh this “front-loaded” employment model.
Water and power: the numbers that matter for ranch country
AI campuses make two non-negotiable asks: electricity and water. Researchers tracking Texas data centers estimate 46–49 billion gallons of water will be consumed statewide in 2025, ballooning toward ~399 billion gallons by 2030 if growth continues, a figure equal to 6–7% of the state’s total water use. Cooling design matters (air vs. water vs. hybrid), as do reuse and siting near non-potable sources, but in drought-stressed basins this demand competes with farms, ranches, and towns already on tight allocations.
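Those two endpoints imply startling growth. A quick sketch of the implied compound annual rate, taking the midpoint of the 2025 range as the baseline:

```python
# Implied compound annual growth rate (CAGR) of Texas data-center water use,
# using the midpoint of the cited 2025 range as the baseline.

gal_2025 = 47.5e9   # midpoint of the 46-49 billion gallon 2025 estimate
gal_2030 = 399e9    # cited 2030 projection
years = 5

cagr = (gal_2030 / gal_2025) ** (1 / years) - 1
print(f"Implied growth: {cagr:.0%} per year")  # roughly 53% per year
```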
Power is the other elephant. 4.5 GW is not a warehouse—it’s a fleet of peaker plants’ worth of load. Feeding that into ERCOT requires new generation, transmission, and substations on timelines that have historically stretched years. If policy shortcuts arrive via executive orders or emergency declarations, rural landowners could see accelerated easements, new high-voltage corridors, and gas interconnects move faster than traditional review windows.
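For a rough sense of the peaker comparison, a hedged sketch assuming typical gas peaker units of roughly 100–200 MW apiece (actual capacities vary widely by plant):

```python
# How many typical gas peaker plants would 4.5 GW of load represent?
# Assumption: peaker units commonly fall in the ~100-200 MW range;
# exact capacities vary widely by plant and vintage.

load_mw = 4_500
for peaker_mw in (100, 200):
    print(f"At {peaker_mw} MW per unit: ~{load_mw // peaker_mw} units")
# At 100 MW per unit: ~45 units
# At 200 MW per unit: ~22 units
```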
Who benefits—and who bears the risk?
For Abilene, Stargate promises a massive capital infusion and a national spotlight. For ranchers and small towns around it, the trade-offs are tangible:
- Short-term boom, long-term trickle: Hundreds to thousands of construction jobs now, but far fewer permanent roles later.
- Public incentives vs. private returns: Deep tax abatements and infrastructure upgrades are public bets that the private tenants will stay long enough to pay back. If tech cycles lurch, counties can be left with sunk costs.
- Competing for water: Every gallon cooling chips is a gallon not irrigating pasture, topping tanks, or serving a town. Transparency on water sources and reuse plans will determine whether neighbors view Stargate as a good steward or a new strain on drought budgets.
- Grid pressure and land use: New lines, transformers, and gas laterals land somewhere—which often means on ranchland. Early, genuine engagement with landowners can prevent the familiar “decide-announce-defend” backlash.
What to watch next
- Contracts vs. construction: Reports of a $300B Oracle–OpenAI cloud pact and OpenAI’s 4.5 GW capacity push are paper until steel is up and powered. Track utility interconnection filings, substation permits, and procurement, not just press releases.
- SoftBank’s financing cadence: When and how the JV capital (distinct from project-level debt) lands will signal how fast “Stargate” expands beyond Texas.
- Water management plans: Look for specific cooling technologies, water-rights disclosures, and commitments to reclaimed or non-potable sources in local filings.
- Local governance: Tax abatement compliance reports, clawback terms, and public-meeting minutes are where promises become benchmarks.
For rural readers, Stargate is more than a tech headline. It’s a real-world test of whether America’s next industrial wave can coexist with working lands. If the companies building this boom engage early on water, transmission, and fair compensation for easements—and if counties insist on smart guardrails—West Texas could turn an AI moonshot into durable prosperity instead of a boom-and-bust mirage.
But do we want this to be a success? Or is AI possibly the greatest threat to humanity?
Who Decides Your AI’s Morals? When Private “Model Specs” Become Public Rulebooks
By design, the world’s most popular AI systems don’t just predict words—they enforce values. The question is whose.
“…whether we believe in our capacity for self-government or whether we abandon the American Revolution and confess that a little intellectual elite in a far-distant capital can plan our lives for us better than we can plan them ourselves.” – Ronald Reagan, “A Time for Choosing” (1964)
The Illusion of Neutrality
Modern AI is often sold as a mirror of humanity: trained on oceans of text and tuned to be “helpful.” But the crucial step isn’t the training; it’s the alignment. In his interview with Tucker Carlson (excerpted further below), Sam Altman describes a “model spec”: a privately authored rulebook that tells the system what to answer, what to refuse, and how to frame sensitive topics.
“We have this thing called the model spec… we had to make some decisions.”
Alignment is not a neutral process. It encodes a moral stance. You can consult “hundreds of moral philosophers,” as Altman says they did, and still end up with a handful of executives deciding the defaults for billions of people. By his own account:
“The person I think you should hold accountable for those calls is me… I can overrule [them]… or our board.”
That is not a distributed moral consensus. That is a single point of control.
Corporate Catechism, Global Congregation
Call it what you like—“model spec,” “safety policy,” “acceptable use”—it functions as a catechism: an internally drafted list of do’s and don’ts, framed as safety but operating as norm-setting. The base model may ingest the world, but the aligned model filters the world.
Altman insists the goal is to “reflect the weighted average of humanity’s moral view,” and to permit customization within “absolute bounds.” Yet the bounds and defaults are decisive. Most people never touch a settings panel. In practice, the global public experiences one set of curated answers, the company’s house view, shipped silently as software.
The Soft Mechanics of Control
It doesn’t take overt censorship to steer a civilization. Small design choices (a refusal here, hedging there, a safety paragraph appended at the exact moment of doubt) can shift perceptions at scale. Altman acknowledges this:
“What I lose most sleep over is the very small decisions we make about the way a model may behave slightly differently… it’s talking to hundreds of millions of people.”
That is the textbook architecture of soft totalitarianism: ambient, deniable, and ubiquitous. A single switch flip in a private “spec” quietly changes how the model frames elections, public health, protest tactics, gun rights, or end-of-life decisions—today in one country, tomorrow in another—without public debate or due process. If the company’s leadership or its partners feel pressure (commercial, political, or legal), the “defaults” can move. Society follows.
Jurisdictional Morality on Demand
In the exchange about assisted suicide, Altman contemplates jurisdiction-dependent answers—acknowledging that the system might present “the option space” where local law endorses it. That isn’t a bug; it’s a design principle: the model’s moral posture can vary by policy regime.
When a dominant reasoning engine harmonizes itself to government preference, you’ve built the perfect compliance amplifier. It need not argue you into submission; it simply pre-bounds your world.
“AI Privilege”—Because Subpoenas Can Reach Your Prompts
On privacy, Altman pushes for a new legal shield: “AI privilege”, akin to doctor-patient or attorney-client confidentiality. The reason is telling:
“Right now, they could [subpoena AI conversations].”
That means your drafts, your medical or legal hypotheticals, your investigative leads: all discoverable. Journalists, activists, litigants, and ordinary citizens are already running their most sensitive thinking through centralized AIs. Without statutory privilege or technical unreadability (e.g., end-to-end encryption with local storage), these tools become ingestion points for discovery, surveillance, or political fishing expeditions.
Concentrated Power, Distributed Consequences
Altman also states plainly that the override rests with him, and at times with the board. Even if we grant the best intentions, structure beats intent. A single chokepoint atop a universal reasoning layer is an invitation to capture:
- Government pressure (formal regulation, informal asks).
- Corporate incentives (advertisers, strategic partners).
- Crisis justifications (“temporary” safety toggles that never fully roll back).
As more of daily life routes through a handful of models—search, schoolwork, medical triage, contracts, code—their “specs” become shadow statutes. Private documents with public force.
Are We Properly Regulated? Not Even Close.
Nothing Altman describes—consultations, internal debate, a published-but-evolving “spec”—resembles democratic legitimacy. There’s no enforceable right to viewpoint plurality, no guaranteed appeals process, no requirement to publish moral redlines before they take effect, and no protection that prevents the state from quietly tugging on the levers.
“Trust us” is not a governance model.
What Real Safeguards Would Look Like
If we’re serious about preventing soft totalitarianism by design, the standards must flip from discretion to duty:
- Radical transparency on values. Publish every revision of the model spec with redlines and a changelog. Time-delay major moral changes; solicit public comment before shipping them globally.
- Independent appeals and audit. A user-facing ombuds plus third-party auditors with authority to review refusals, political content handling, and sensitive-topic prompts across languages and jurisdictions.
- No secret override. Make any value-laden change require multi-party sign-off, with a public docket. Emergency switches sunset automatically.
- Data minimization by default. Local running modes, client-side encryption, short retention windows, and clear “no logging” options for privileged contexts (a minimal sketch of the client-side-encryption idea follows this list).
- Subpoena transparency. Mandatory user notice and periodic public reporting of data demands, absent a narrowly tailored court order delaying notice.
- True privilege or true unreadability. Either create statutory AI privilege for medical/legal/confessional use—or make the conversations technically inaccessible to the provider.
- Portability and competition. Interoperability and export rights so one firm’s moral defaults don’t become a de facto state religion.
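On the data-minimization item above, the underlying technique is ordinary engineering, not moonshot research. A minimal sketch using Python’s `cryptography` library, under the assumption of a hypothetical workflow where the key is generated and kept on the user’s device and only ciphertext ever leaves it:

```python
# Minimal client-side encryption sketch (hypothetical workflow):
# the key is generated and stored locally; only ciphertext would be
# synced or transmitted, so the provider cannot read the content.
# Requires: pip install cryptography

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # stays on the user's device
cipher = Fernet(key)

sensitive_note = b"draft question for my attorney about the easement dispute"
ciphertext = cipher.encrypt(sensitive_note)   # safe to store or transmit

# Only the key holder can recover the plaintext:
assert cipher.decrypt(ciphertext) == sensitive_note
```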
The Stakes, Stated Plainly
If one executive can “overrule” the moral rules of the world’s most-used reasoning engine, that’s not innovation—it’s governance without consent. Until the rules are public, appealable, and insulated from both C-suite whim and government pressure—and until your conversations are legally protected or technically unreadable—AI is a perfect instrument for soft totalitarianism. The transcript makes that plain; the question is whether we’ll fix it before it fixes us.
I Am From Silicon Valley and I Am Here To Help
The elite leadership behind “Stargate” is drawing serious moral scrutiny—and it should. Why hand so much trust and virtue to people we hardly know? Why keep assuming distant elites know what’s best for us? As Ronald Reagan cautioned, “The nine most terrifying words in the English language are: ‘I’m from the government and I’m here to help.’”
For that matter, Tucker Carlson is a hero for asking the tougher questions.
The On-Air Clash Over Suchir Balaji
In a tense interview clip shared this week, Tucker Carlson pressed OpenAI CEO Sam Altman about the 2024 death of former OpenAI researcher Suchir Balaji. On air, Carlson asserted that Balaji “was definitely murdered,” framing his questions around what he described as troubling details.
Carlson listed the points he said raised red flags: “signs of a struggle,” “cut” surveillance-camera wires, and “blood in two rooms.” He also said Balaji had just returned from a trip with friends to Catalina Island and ordered takeout that evening, and he questioned how authorities “could just kind of dismiss that as a suicide.”
Altman responded by calling Balaji’s death “a great tragedy,” saying, “He committed suicide.” When Carlson asked if he truly believed that, Altman said, “I really do… He was like a friend of mine,” adding that he had read everything he could and that “it looks like a suicide to me.” When pressed on why, Altman answered, “It was a gun he had purchased.”
As the exchange escalated, Altman pushed back on Carlson’s framing: “You understand how this sounds like an accusation?” He also remarked, “I haven’t done too many interviews where I’ve been accused of murder.”
Carlson reiterated his position on air—“He was definitely murdered, I think”—and referenced Balaji’s mother, saying she believes her son was killed. Altman replied by questioning the premise and tone of the line of inquiry.
During the conversation, Altman allowed that details like those could make the death look like a homicide.
Soon after the clip spread online, Elon Musk weighed in with a two-word post on X: “He was murdered.”
To be clear, the real issue isn’t the rumors about Balaji or Altman; it’s why we’re being asked to cede public trust to Stargate and its circle of elites.
Sources
- [1] Stargate LLC overview (Wikipedia): https://en.wikipedia.org/wiki/Stargate_LLC
- [2] SoftBank says Stargate ‘needs more time’ (Bloomberg, Aug. 7, 2025): https://www.bloomberg.com/news/articles/2025-08-07/softbank-concedes-stargate-project-with-openai-needs-more-time
- [3] Abilene 85% property-tax abatement for Oracle (Business Insider, Jul. 17, 2025): https://www.businessinsider.com/oracle-seeks-bigger-tax-break-texas-property-value-protest-2025-7
- [4] OpenAI: Stargate advances with 4.5 GW partnership with Oracle (OpenAI, Jul. 22, 2025): https://openai.com/index/stargate-advances-with-partnership-with-oracle/
- [5] Report: OpenAI–Oracle $300B cloud deal (DataCenterDynamics, Sep. 11, 2025): https://www.datacenterdynamics.com/en/news/openai-signs-300bn-cloud-deal-with-oracle-report/
- [6] Debt packages/backing for Oracle-linked data centers (DataCenterDynamics): https://www.datacenterdynamics.com/en/news/jpmorgan-chase-and-mitsubishi-ufj-lead-38bn-debt-package-for-oracle-linked-data-centers-report/
- [7] Crusoe–Blue Owl–Primary Digital Infrastructure JV (Crusoe press room): https://www.crusoe.ai/resources/newsroom/crusoe-blue-owl-capital-and-primary-digital-infrastructure-enter-joint-venture
- [8] Texas data centers’ water use projection (Newsweek, Aug. 1, 2025): https://www.newsweek.com/texas-data-center-water-artificial-intelligence-2107500