The future of AI computing is blasting off into orbit!

As explosive AI growth pushes terrestrial data centers to their limits – devouring massive amounts of electricity, guzzling billions of gallons of water for cooling, and running into land shortages, permitting delays, and grid overloads – a revolutionary alternative is emerging: orbital data centers.

What once sounded like pure sci-fi is now reality. Just last month (November 2025), Nvidia-backed startup Starcloud (formerly Lumen Orbit) launched Starcloud-1, a compact satellite carrying a full Nvidia H100 GPU – 100x more powerful than any prior space compute hardware. And it worked spectacularly: in orbit, the team successfully trained and ran multiple AI models, including Andrej Karpathy's NanoGPT on the complete works of Shakespeare and Google's open-source Gemma LLM. This marks the first-ever AI model training in space, proving data-center-class GPUs can thrive in orbit.

The advantages are mind-blowing:

  • Near-constant solar power: In optimized sun-synchronous orbits, satellites get up to 8x more effective energy than ground panels, with no night cycles or weather interruptions.
  • Zero water cooling: Waste heat radiates directly into the cold vacuum of space – no evaporation towers, no freshwater strain.
  • Unlimited scalability: No land acquisition, no local opposition, no grid upgrades needed.
  • Potentially 10x lower long-term costs: Even factoring in launches, abundant clean energy and passive cooling slash operational expenses.
  • Sustainability boost: Orbital facilities could dramatically cut AI’s carbon footprint while preserving Earth’s precious resources.

The momentum is unstoppable. Major players are racing ahead:

  • Starcloud plans clusters with multiple H100s and Nvidia’s next-gen Blackwell GPUs in 2026–2027, targeting commercial workloads like satellite imagery inference for disaster response.
  • Aetherflux unveiled “Galactic Brain” – aiming for the first commercial orbital AI node in Q1 2027, leveraging space solar for unrestricted compute.
  • SpaceX (via Elon Musk) is adapting high-power Starlink V3 satellites for AI processing, with massive deployment potential via Starship.
  • Blue Origin has been quietly developing orbital data center tech for over a year.
  • Google’s Project Suncatcher explores solar-powered AI satellite constellations.
  • Axiom Space plans to launch orbital data nodes soon.
  • Europe's ASCEND project (led by Thales Alenia Space) has confirmed the feasibility of gigawatt-scale orbital data centers by mid-century.

Of course, real engineering challenges exist. Cooling dense racks demands large deployable radiators (governed by Stefan-Boltzmann radiation physics; a rough sizing sketch appears at the end of this post), radiation hardening for reliable operation, added latency on ground links, and significant upfront launch costs. But plummeting reusable-rocket prices (thanks to Starship), innovative lightweight radiators, and proven demos like Starcloud-1 are rapidly closing those gaps.

We're witnessing the dawn of a new era: abundant, green, scalable compute powering the AI revolution without burdening our planet. Orbital data centers aren't just hype – they're the sustainable path forward.

What excites you most about this frontier? Will space host the world's largest AI factories by 2040? Drop your thoughts below!
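To make the radiator requirement concrete, here is a back-of-the-envelope Stefan-Boltzmann sizing sketch in Python. The heat load, radiator temperature, and emissivity are illustrative assumptions, not figures from Starcloud or any other mission.

```python
# Back-of-the-envelope radiator sizing for an orbital GPU cluster.
# All inputs are illustrative assumptions, not mission data.

SIGMA = 5.670e-8           # Stefan-Boltzmann constant, W/(m^2 K^4)

heat_load_w  = 100_000     # assumed waste heat to reject: 100 kW of GPUs
emissivity   = 0.90        # assumed radiator surface emissivity
t_radiator_k = 330.0       # assumed radiator temperature (~57 degC)
t_sink_k     = 4.0         # deep-space background, effectively negligible

# Net flux radiated per square metre of (single-sided) radiator surface.
flux_w_per_m2 = emissivity * SIGMA * (t_radiator_k**4 - t_sink_k**4)

area_m2 = heat_load_w / flux_w_per_m2
print(f"Radiated flux: {flux_w_per_m2:.0f} W/m^2")
print(f"Radiator area for {heat_load_w / 1000:.0f} kW: ~{area_m2:.0f} m^2")
```

With these assumptions the answer comes out around 165 m² of single-sided radiator per 100 kW of waste heat, which is why lightweight deployable radiators matter so much.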

Subscribe & Share now if you are building, operating, and investing in the digital infrastructure of tomorrow.

#AI #SpaceTech #Innovation #ArtificialIntelligence #Sustainability #FutureOfComputing #OrbitalDataCenters #SpaceAI #AIRevolution #SustainableTech #TechInnovation #DeepTech

https://www.linkedin.com/pulse/future-ai-computing-blasting-off-orbit-andris-gailitis-c2ewf

Data Centres: From Tenants to Titans

Five years ago, few imagined that data centres — those humming, power-hungry fortresses of servers — would become one of the most coveted infrastructure assets on the planet.

But that’s exactly what has happened.

The balance of power has flipped. Once, hyperscalers like AWS, Google, and Microsoft dictated lease terms and pricing. Today, it’s the developers and operators holding the upper hand — because the real scarcity isn’t capital anymore. It’s power and land.


💡 The Golden Ticket

A leading infrastructure investor recently called power access “a golden ticket” — and it’s hard to disagree.

In the age of AI and hyperscale cloud growth, a secured grid connection is everything. You can raise billions and hire world-class engineers — but if you can’t plug into the grid, you can’t scale.

The numbers tell the story. In 2021, U.S. data centre rents averaged around $120 per kW per month. By 2024, that figure had climbed by more than 50%, nearing $190 per kW. London saw similar jumps. This isn't inflation — it's scarcity economics.

Those who control powered land now hold the real bargaining power.


🧭 From Hyperscaler Leverage to Developer Control

For years, hyperscalers pushed for short 5- to 7-year contracts and flexible termination rights. They called the shots.

Not anymore.

Tight grid capacity and exploding AI demand have turned the tables. Tenants who once wanted short leases are now regretting it — there’s simply no capacity left, and renewals cost far more.

Today, 10-, 15-, and even 20-year contracts are the norm again. Banks and institutional lenders love it: predictable cash flows, long-dated contracts, and high-credit counterparties. Data centres are starting to look, feel, and finance like traditional infrastructure.


🏗️ From Colocation to Mega-Campuses

The model has evolved dramatically. What used to be a landscape of multi-tenant colocation sites is becoming a network of massive, single-tenant campuses — hundreds of megawatts each — built around one hyperscaler.

That shift allows developers to recover rising capex costs tied to liquid cooling, AI training, and high-density workloads. Interestingly, many hyperscalers are now co-funding upgrades, treating them as tenant improvements, just like in commercial real estate.

It’s a more mature, symbiotic model — one that aligns incentives and strengthens partnerships.


🤝 Creative Structures and Shared Risk

Deal structures are also becoming more sophisticated.

When Meta financed its $26 billion data centre campus in Louisiana, the project reportedly included a “residual value guarantee.” In other words, if Meta exited early and the asset value dropped, investors would be compensated.

A few years ago, such clauses were rare. Now they’re becoming standard as both sides seek to balance long-term risk and reward.

Developers are also designing hybrid facilities — capable of switching between air and liquid cooling — and adopting flexible layouts that can evolve with technology. As Brookfield’s Sikander Rashid noted, “A chip’s useful life is about five years — your return on capital should match that.”


🏦 Core Capital Enters the Game

Not long ago, core and core-plus funds avoided data centres, seeing them as too technology-driven. That’s changing fast.

Brookfield, Arjun Infrastructure Partners, and Interogo recently invested in a €3.6 billion European data centre portfolio with 12-year average contracts and inflation-linked escalators — exactly the type of structure core infrastructure funds love.
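To see why inflation-linked escalators appeal to core funds, here is a toy Python sketch of how a 12-year escalator compounds. The starting rent, CPI assumption, and cap/floor collar are invented examples, not terms from the portfolio mentioned above.

```python
# Toy illustration of an inflation-linked rent escalator over a 12-year term.
# Starting rent, CPI assumption, and the cap/floor collar are placeholders.

start_rent_per_kw = 150.0   # $/kW/month in year 1 (assumed)
assumed_cpi = 0.02          # assumed 2% annual inflation
cap, floor = 0.04, 0.01     # assumed escalator collar

rent = start_rent_per_kw
for year in range(1, 13):
    print(f"Year {year:2d}: ${rent:7.2f} per kW per month")
    escalation = min(max(assumed_cpi, floor), cap)  # CPI, bounded by the collar
    rent *= 1 + escalation
```

At a steady 2% the contracted rent rises by roughly a quarter over the term, which is exactly the predictable, inflation-protected cash flow profile lenders are looking for.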

One industry insider summed it up perfectly:

“If you’ve got powered land near population centres, your barrier to entry is the grid connection itself.”

In other words: the moat isn’t a brand or a logo — it’s megawatts.


⚙️ The Moat Built on Megawatts

Every road in this story leads back to power.

If forecasts hold true, most major data centre hubs will hit grid constraints within a decade. That physical bottleneck — not capital — will define value.

It’s why long-term leases are back. It’s why banks are lending more confidently. And it’s why investors view data centres as durable, inflation-protected infrastructure.

Operators like DigitalBridge are also moving to triple-net leases, where tenants manage their own power and cooling systems. That shift drives efficiency and attracts even more institutional capital.


🌍 The Future: Flexible, Long-Term, and Infra-Grade

So, are data centres infrastructure? The debate is over.

They’ve earned their place alongside utilities, ports, and energy assets — long-term contracts, critical grid dependence, and predictable returns.

But beyond the financials lies a bigger truth: the digital economy runs on electrons and geography. Whoever controls the megawatts controls the growth.

AI will only intensify this. The next generation of winners will be those who think like infrastructure investors but move like tech builders — fast, flexible, and focused on power resilience.

The moat is no longer theoretical. It’s physical. It’s grid-connected. And it’s here to stay.


✍️ The digital economy’s backbone isn’t code — it’s concrete, copper, and current. The investors who understand that first will shape the next decade of infrastructure.

Subscribe & Share now if you are building, operating, and investing in the digital infrastructure of tomorrow.

#DataCentres #InfrastructureInvesting #AIInfrastructure #DigitalTransformation #Sustainability #EnergyTransition #RealAssets #PrivateEquity #InfraFunds #Hyperscale #CloudComputing #PowerMarkets #GridCapacity #DataEconomy #LongTermCapital

https://www.linkedin.com/pulse/data-centres-from-tenants-titans-andris-gailitis-qjobf

The Elite Cycle

1. What is “elite overproduction” or “elite inflation”?

The term elite overproduction (sometimes called elite inflation) describes a situation where society produces more people aspiring to elite status — for example, educated and ambitious individuals seeking high-level positions or influence — than there are actual openings within the elite itself (in politics, business, public administration, etc.).

The “elite” refers to those who hold or strive to hold positions of power and influence — political, economic, ideological, or intellectual.

When there are too many educated and ambitious people, and not all can be “absorbed” into the elite, a layer of frustrated and excluded aspirants emerges. In structural-demographic theory (SDT), this is seen as one of the main drivers of social instability.


2. Why can having “too many elites” create instability?

Competition within the elite: The more people compete for a limited number of top positions, the fiercer the rivalry becomes among elites themselves — leading to divisions and internal conflicts.

Frustrated "failed" elite aspirants: Those who expected to gain power or status but were shut out often become bitter and may join radical or anti-establishment movements.

Erosion of legitimacy: When many people see that the path upward is blocked while the existing elite clings to its privileges, trust in the system declines.

Ideological polarization: As elite groups compete for support, they may adopt increasingly extreme or opposing positions, further dividing society.

Administrative overload: Too many elite aspirants can put pressure on state resources — more people demanding jobs, subsidies, or privileges — which undermines fiscal stability and governance quality.


3. Strengths and criticisms

Strengths

  • Explains a visible phenomenon: many educated, ambitious people feeling “stuck.”
  • Highlights that crises can be caused not only by the masses but also by elite rivalries themselves.
  • Supported by historical parallels — such as the late Roman Empire, Chinese dynastic collapses, or European crises.

Weaknesses

  • It’s hard to precisely define who counts as “elite” and how much “overproduction” there really is.
  • The theory can sound overly deterministic — as if decline is inevitable and cyclical.
  • Other scholars emphasize different causes: inequality, the shrinking middle class, institutional decay, or media polarization.
  • Not every case of elite overproduction leads to crisis — strong institutions can sometimes absorb and adapt.

4. Does this apply to today's Western world?

We can see similar patterns: many university graduates with high ambitions unable to find jobs matching their skills. These “failed elite aspirants” may become critics of the political system, supporters of radicals, or simply a destabilizing force.

The key question is whether this phenomenon is widespread enough in Latvia and the West to become a systemic problem. It seems to grow dangerous when combined with other factors — economic inequality, fiscal strain, and public distrust in institutions.

Subscribe & Share now if you are building, operating, and investing in the digital infrastructure of tomorrow.

#Leadership #Society #Economics #Politics #SocialTrends #EliteOverproduction #FutureOfWork #Inequality #Latvia #CrowdedAtTheTop

https://www.linkedin.com/pulse/elite-cycle-andris-gailitis-pli0f

Mind the Gap: Closing the Expectation Divide in Cloud & Data Center Services

Global demand for cloud computing and data center services is growing faster than ever. The explosion of hyperscalers and AI workloads provides unprecedented growth impetus, and companies at every level depend on providers to maintain high-performance infrastructure. Yet amid all this innovation and growth, one thing hasn't changed: the gap between what service agreements promise and what customers expect.

Uptime: The One-Way Street of Gratitude

Most professional hosting and cloud agreements set a minimum uptime commitment (generally 99.95% and up) for essential network and infrastructure. 99.95% sounds perfect, but in practice it allows for almost 4½ hours of downtime a year. Here's the paradox: if a provider delivers flawless service for years, no one writes a thank-you note. The silence on success is just "business as usual." But when a five-minute blip happens, still well within the 99.95% allowance, support lines light up and legal clauses get quoted back to the provider. It's not that customers are ungrateful; the lesson is that reliability has become invisible. Uptime is simply expected, and any deviation, no matter how slight or contractually permissible, is treated as a failure.
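For readers who want the arithmetic behind that "almost 4½ hours," here is a small Python sketch converting common uptime percentages into the downtime they actually permit; it is plain arithmetic, not any provider's published terms.

```python
# Convert an SLA uptime percentage into the downtime it actually permits.

HOURS_PER_YEAR = 365.25 * 24
HOURS_PER_MONTH = HOURS_PER_YEAR / 12

for uptime_pct in (99.9, 99.95, 99.99, 99.999):
    downtime_fraction = 1 - uptime_pct / 100
    per_year_h = downtime_fraction * HOURS_PER_YEAR
    per_month_min = downtime_fraction * HOURS_PER_MONTH * 60
    print(f"{uptime_pct:>7}% uptime -> {per_year_h:5.2f} h/year, "
          f"{per_month_min:5.1f} min/month allowed")
```

At 99.95% that works out to roughly 4.4 hours per year, or about 22 minutes per month, of contractually permissible downtime.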

Backups: The Unpaid — and Often Misplaced — Safety Net

The other consistent rub is backup accountability. Many cloud and bare-metal customers assume their data is automatically backed up simply because it resides in a professional data center. In practice, most standard agreements cover access to the infrastructure, not protection of the data on it. When a virtual machine fails or a dedicated server's disk dies – infrequent but inevitable events – customers without a backup plan often expect the provider to "just recover it." Unless backups were part of the contract (or bought as an add-on), the provider cannot magically restore lost data. The rule is simple yet often ignored: you don't have backup and recovery if you haven't paid for them. Providers can and should educate customers, but the responsibility for protecting data integrity ultimately falls on the data owner.

Backups on the Same Server: A Hidden Catch

Even customers who maintain backups can fall into the trap of storing those backups on the same VM or dedicated server they’re trying to protect. When the underlying hardware fails, it means that both the live data and the “backup” could disappear in a single stroke. Real resilience is holding backups offsite or at least on different physical infrastructure — in another availability zone, on another storage platform or through a managed backup service. A backup that shares the same failure domain isn’t a backup at all; it is simply yet another copy waiting to fail.
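As a minimal illustration of the failure-domain point, here is a sketch of the kind of sanity check one could run over a backup inventory. The hostnames, zones, and inventory format are hypothetical.

```python
# Sketch: flag backups that share a failure domain with the data they protect.
# Hosts, zones, and the inventory format below are hypothetical examples.

backups = [
    {"workload": "db-primary",   "data_host": "srv-101", "data_zone": "zone-a",
     "backup_host": "srv-101",   "backup_zone": "zone-a"},   # same server!
    {"workload": "web-frontend", "data_host": "srv-102", "data_zone": "zone-a",
     "backup_host": "nas-201",   "backup_zone": "zone-b"},   # offsite copy
]

for b in backups:
    if b["data_host"] == b["backup_host"]:
        verdict = "NOT a backup: lives on the same physical server"
    elif b["data_zone"] == b["backup_zone"]:
        verdict = "weak: separate host, but same failure domain (zone)"
    else:
        verdict = "ok: different host and different zone"
    print(f"{b['workload']:<14} -> {verdict}")
```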

Planned Maintenance: No Good Deed Goes Unpunished

Even the most reliably established infrastructure requires care. Hardware firmware must be patched, network devices upgraded, and security appliances kept current with the latest updates. Nearly every service agreement specifies scheduled maintenance windows, and providers generally do this work in the dead of night with ample notice. Yet maintenance notices regularly provoke resistance. Some customers demand zero disruption at any cost, even when the work is needed to prevent future outages. Ironically, the clients who value stability most can be hostile to the very processes needed to preserve it.

Bridging the Expectation Gap

So how do providers and customers meet in the middle?

Crystal-Clear SLAs

Service Level Agreements need to be written in plain language, specifying uptime objectives, response times, and — crucially — what is not included. Define roles for backups, recovery and data retention.

Proactive Education

Providers should communicate the realities of uptime percentages, maintenance needs, and backup responsibilities during the sales process, not after the fact.

Shared Responsibility Models

The public cloud giants such as AWS and Azure made the term "shared responsibility" famous, but it applies just as much to infrastructure-as-a-service (IaaS) and colocation: the provider maintains the platform, while the customer secures and backs up their data.

Celebrate Reliability

It might seem a little self-congratulatory, but regular reports of "X days of uninterrupted service" remind subscribers of what they are getting back — and can help soften feelings when an unavoidable event plays out.

Not a Transaction, a Partnership

A data-center or cloud agreement is, at its simplest, a partnership. Providers commit to world-class uptime, redundancy, and security; clients commit to understanding the scope of those services and planning accordingly. When each side treats the contract as a living document rather than fine print, there's less room for surprise and fewer panicked calls when the inevitable hiccup occurs.

Takeaway: Nothing about world-class infrastructure is ever "set and forget." Transparency is the key to successful customer relationships: explicit SLAs, clearly defined mutual responsibilities, and an understanding that maintenance, backups (stored in separate locations), and occasional downtime make the system better. A strong provider is not one that never has a problem, but one that communicates openly, keeps its promises, and works with customers through the times when the lights go out.

#CloudComputing #DataCenters #SLA #Uptime #Downtime #CloudServices #Infrastructure #DevOps #ITOperations #ServiceLevelAgreement #HighAvailability #CloudReliability #CloudBackup #PlannedMaintenance #BusinessContinuity

https://www.linkedin.com/pulse/mind-gap-closing-expectation-divide-cloud-data-center-andris-gailitis-wo94f

When the Cable Snaps: Why Regional Compute Can’t Be an Afterthought

The internet is a web of glass threads lying on the seabed. Twice, in starkly different seas, those threads were cut.


In November 2024, two subsea cables in the Baltic Sea were cut within hours of one another, reducing capacity across Finland, Lithuania, Sweden, and Germany.

In September 2025, multiple systems in the Red Sea, one of the world's busiest internet corridors, were damaged, severely degrading services across Europe, the Middle East, and Asia.

Each event had its own cause, but the net effect for users, enterprises, and cloud providers was the same: latency spikes, rerouting stress, and an unpleasant lesson that our digital lives rely on a handful of physical chokepoints.

The myth of infinite bandwidth

It is easy to assume “the cloud” will just absorb disruptions. Microsoft and AWS do have very good redundancy, and traffic was rerouted. But physics can’t be abstracted away:

  • Latency increases when traffic takes a bypass thousands of kilometers longer.
  • Throughput decreases when alternative routes inherit the displaced workloads.
  • Resilience shrinks when other cables in the same geography fail.

For latency-sensitive services — trading platforms, multiplayer gaming, video collaboration — the difference between 20 ms and 150 ms is the difference between usable and unusable. And when compliance-heavy workloads are rerouted through unfamiliar jurisdictions, that carries risks of its own.
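To see where numbers like 20 ms and 150 ms come from, here is a rough Python sketch estimating round-trip time from fibre path length alone, assuming signals travel at roughly 200,000 km/s in optical fibre. The path lengths are illustrative, and real routes add routing, queuing, and equipment delay on top.

```python
# Rough round-trip time (RTT) from fibre path length alone.
# Ignores routing, queuing, and equipment delay, which only add to these numbers.

SPEED_IN_FIBRE_KM_S = 200_000   # roughly two-thirds of the speed of light

def rtt_ms(path_km: float) -> float:
    return 2 * path_km / SPEED_IN_FIBRE_KM_S * 1000

for label, km in [("direct regional route", 2_000),
                  ("rerouted around a failed corridor", 12_000)]:
    print(f"{label:<34} ~{km:>6,} km -> RTT about {rtt_ms(km):5.1f} ms")
```

A 2,000 km regional path gives roughly 20 ms of propagation RTT; a 12,000 km detour is already at 120 ms before any congestion is counted.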

Regional compute is the antidote

The lesson is that enterprises that don't want to be exposed to chokepoints need regional compute capacity closer to both users and data sources. Regional doesn't just mean "on the same continent" — it means operations that can keep running even if a submarine cable is cut and major international routes go offline. Regional compute helps in three ways:

1. Continuity of performance – Keep mission-critical applications fast and stable when cross-ocean paths break.

2. Risk diversification – Eliminate dependence on a single corridor — Red Sea, Baltic Sea, English Channel, etc.

3. Regulatory alignment – In some jurisdictions, including the EU, keeping data within borders also addresses sovereignty requirements.

Europe as a case study: sovereignty through resilience

Europe's push for "digital sovereignty" (see NIS2, the EU Data Boundary, AWS' European Sovereign Cloud, and so on) is frequently framed in terms of compliance and control. But the cable incidents illustrate a more fundamental principle: keeping capacity local is a resilience measure first, a regulatory checkbox second.

If you're working inside the EU, sovereignty is one factor. In Asia, the reasoning is similar — no need to rely on Red Sea transit. In North America, resilience might mean investing in a variety of east–west terrestrial routes to protect against coastal chokepoints.

A global problem with regional solutions

Route disruptions, caused by natural catastrophes, ship anchors, or even deliberate sabotage, have struck the Atlantic, Pacific, and Indian oceans. Every geography has its weak spots. That's why international organizations are increasingly asking: where can we compute if the corridor collapses?

The answer frequently isn’t another distant hyperscale region. It’s:

  • Regional data centers embedded in terrestrial backbones.
  • Local edge nodes for caching and API traffic.
  • Cross-border clusters with real route diversity, not just carrier diversity.

Building for the next cut

Here’s what CIOs, CTOs, and infrastructure leaders can do:

1. Map your exposure. Do you know which subsea corridors carry most of your workloads' traffic? Most organizations don't. Ask your providers for path transparency.

2. Design for "cable cut mode." Envision what happens if the Baltic or Red Sea corridor goes dark. Test failover, measure latency, and revise the architecture accordingly (a minimal latency-probe sketch follows this list).

3. Invest regionally, fail over regionally. Don't just replicate data across the ocean. Build failover in your own core market where possible.

4. Contract for resilience. Route diversity, repair-time commitments, regional availability — build these into your SLAs.

5. Frame it as business continuity. This is not only a network-ops issue; it's a boardroom problem. One day of degraded service can cost more than the additional regional capacity.
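For item 2, here is a minimal Python sketch of a latency probe that measures TCP connect time to the regional endpoints you would fail over to. The hostnames and the 100 ms budget are placeholders for your own targets.

```python
# Sketch of a "cable cut mode" drill: measure TCP connect latency to the
# regional endpoints you would fail over to, against a latency budget.
# Hostnames and the 100 ms budget are placeholders.

import socket
import time

ENDPOINTS = [("eu-primary.example.com", 443),
             ("eu-fallback.example.com", 443)]   # hypothetical targets
LATENCY_BUDGET_MS = 100

def connect_latency_ms(host: str, port: int, timeout: float = 3.0) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

for host, port in ENDPOINTS:
    try:
        ms = connect_latency_ms(host, port)
        status = "within budget" if ms <= LATENCY_BUDGET_MS else "OVER budget"
        print(f"{host}: {ms:.1f} ms ({status})")
    except OSError as exc:
        print(f"{host}: unreachable ({exc})")
```

Run it from the regions that matter to you, during normal operation and during failover tests, and keep the history so you can see drift.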

Beyond sovereignty

Yes, sovereignty rules in Europe are a push factor. But sovereignty alone doesn't explain why a fintech in Singapore, a SaaS company in Toronto, or a hospital network in Nairobi would care about regional compute. They should care because cables are fragile, chokepoints are real, and physics doesn't negotiate.

The bottom line

Last year’s cable cuts weren’t necessarily catastrophic. They were warnings. And the world’s dependence on a few narrow subsea corridors is increasing, not decreasing. As AI, streaming, and cloud adoption accelerate, the stakes rise.

Regional compute isn’t all about sovereignty. It’s about resilience. The organizations that internalize that lesson right now—before the next snap—will be the ones that stay fast, compliant, and reliable while others grind to a halt.

Subscribe & Share now if you are building, operating, and investing in the digital infrastructure of tomorrow.

#SubseaCables #CableCuts #DigitalResilience #RegionalCompute #DataCenters #EdgeComputing #NetworkResilience #CloudInfrastructure #DigitalSovereignty #Latency #BusinessContinuity #NIS2 #CloudComputing #InfrastructureSecurity #DataSovereignty #Connectivity #CriticalInfrastructure #CloudStrategy #TechLeadership #DigitalTransformation

https://www.linkedin.com/pulse/when-cable-snaps-why-regional-compute-cant-andris-gailitis-we9sf

AWS, Azure, and Google Cloud vs. Everyone Else: What Truly Makes Them Different

When people speak of "the cloud," they usually mean the big three: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Together, they control more than two-thirds of the global market. But they are not the whole picture. Smaller providers — from Oracle, IBM, and Alibaba to regional and niche players like OVHcloud, Hetzner, Scaleway, DigitalOcean, and Wasabi — keep expanding by offering something different.

So what actually distinguishes the hyperscalers from the rest? And what might lead organizations — particularly in Europe — to hesitate before doubling down on the big three?


1. Scale and Global Reach

  • AWS, Azure, GCP:
  • Other providers:

👉 Tip: If your business is truly global, hyperscalers prevail. If you are region-specific, a specialist provider may be more effective.


2. Breadth of Services vs. Complexity

  • AWS, Azure, GCP:
  • Other providers:

👉 Lesson: The big clouds let you innovate quickly, but they can also make you dependent. Smaller providers give you focus and flexibility.


3. Pricing and the Illusion of Cheap

  • AWS, Azure, GCP:
  • Other providers:

💡 Key Takeaway: Hyperscalers are nearly always more expensive than managed hosting or hardware-for-rent models in Europe. Renting bare-metal servers with managed services can deliver similar performance at a lower TCO — without paying for unused features.
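To make the TCO argument tangible, here is a toy monthly cost model in Python. Every figure is an invented placeholder rather than a quote from any provider, so substitute your own numbers before drawing conclusions.

```python
# Toy monthly TCO comparison: hyperscaler on-demand VMs vs. rented bare metal
# with managed services. All numbers are illustrative placeholders.

hyperscaler = {
    "compute": 40 * 350,        # 40 on-demand VM instances at ~$350/month each
    "egress":  30_000 * 0.08,   # 30 TB of egress at ~$0.08 per GB
    "support": 1_500,
}
bare_metal = {
    "servers": 10 * 900,        # 10 rented dedicated servers at ~$900/month
    "bandwidth": 0,             # flat-rate bandwidth bundled into the rental
    "managed_services": 2_000,
}

for label, costs in (("Hyperscaler on-demand", hyperscaler),
                     ("Rented bare metal", bare_metal)):
    print(f"{label:<22} ${sum(costs.values()):>9,.0f} per month")
```

The point is not the exact totals but the structure: egress and per-instance pricing scale with usage, while rented hardware is largely flat.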


4. EU Regulation and Sovereignty

Now this gets political.

  • The EU's Data Act and AI Act aim to strengthen digital sovereignty. Yet most European enterprises still host sensitive workloads on U.S.-controlled hyperscalers.
  • Under the U.S. CLOUD Act, American companies can be compelled to hand over data, even if it is stored in Europe. This creates legal uncertainty for banks, governments, and healthcare providers in the EU.
  • European regulators and CIOs therefore increasingly view reliance on U.S. clouds as a sovereignty risk.

Smaller European providers (OVHcloud, Scaleway, Deutsche Telekom’s Open Telekom Cloud) are picking up on this by ensuring data stays in Europe and is governed by European law.


5. The Lock-In Problem

It’s simple. You can get onto a hyperscaler with migration tools, free credits, and onboarding teams.

But getting off? That’s where the trap lies.

  • Proprietary databases, AI frameworks, serverless functions, and APIs don't move easily.
  • Egress fees (paying to get your data out of the cloud) create financial barriers.
  • Complicated contracts and enterprise agreements bind customers for years.

👉 Once you’re deep into AWS, Azure, or Google Cloud, re-engineering workloads toward on-prem or moving to another provider is almost impossible.


6. Compliance and Industry Fit

  • Hyperscalers:
  • Other providers:

7. Innovation vs. Specialization

  • AWS, Azure, GCP:
  • Other providers:

The Strategic Choice

So what do decision-makers need to think about?

  • If you need global scale and advanced services, hyperscalers are unmatched.
  • If you want predictability, sovereignty, and lower costs, regional providers or managed hosting often win.
  • For some, the answer is a multi-cloud or hybrid approach: run AI workloads on a hyperscaler, hold sensitive or critical data with a sovereign provider, and use managed hosting for cost-sensitive workloads.

Final Thoughts

The big three clouds are powerful — but they come with strings attached: higher costs, lock-in, and sovereignty risks. Smaller and regional providers offer simpler pricing, local compliance, and more freedom.

In Europe in particular, the debate isn’t merely technical — it is political. Depending entirely on U.S. hyperscalers may solve today’s scaling problems, but it poses long-term risks to sovereignty and independence.

💡 Takeaway for business leaders: Don’t just ask “Which cloud is the biggest?” Ask “Which cloud best aligns with my strategy, compliance, and sovereignty needs?” The best choice might be a well-balanced one.

Subscribe & Share now if you are building, operating, and investing in the digital infrastructure of tomorrow.

#Cloud #AWS #Azure #GoogleCloud #MultiCloud #DataSovereignty #EUAIAct #CloudAct #DataCenters #ManagedHosting #DigitalSovereignty #LockIn #FinOps #AI #Infrastructure

https://www.linkedin.com/pulse/aws-azure-google-cloud-vs-everyone-else-what-truly-makes-gailitis-kfukf

Why Colocation and Private Infrastructure Are Making a Comeback—and Why Cloud Hype Is Wearing Thin


The Myth of Cloud-First—And the Reality of Repatriation.

For nearly a decade, businesses have been sold the idea of “cloud-first” as a golden ticket—unlimited scale, lower costs, effortless agility. But let’s be frank: that narrative wore thin a while ago. Now we’re seeing a smarter reality take shape—cloud repatriation: organizations moving workloads back from public cloud to colocation, private cloud, or on-prem infrastructure.

These Numbers Are Real—and Humbling

Still, let’s be clear: only about 8–9% of companies are planning a full repatriation. Most are just selectively bringing back specific workloads—not abandoning the cloud entirely. (https://newsletter.cote.io/p/that-which-never-moved-can-never)

Why Colo and On-Prem Are Winning Minds

Here’s where the ideology meets reality:

1. Predictable Cost Over Hyperscaler Surprise Billing

Public cloud is flexible—but also notorious for runaway bills. Unplanned spikes, data transfer fees, idle provisioning—it all adds up. Colo or owned servers require upfront investment, sure—but deliver stable, predictable costs. Barclays noted that spending on private cloud is leveling or even increasing in areas like storage and communications (https://www.channelnomics.com/insights/breaking-down-the-83-public-cloud-repatriation-number and https://8198920.fs1.hubspotusercontent-na1.net/hubfs/8198920/Barclays_Cio_Survey_2024-1.pdf).

2. Performance, Control, Sovereignty

Sensitive workloads—especially in finance, healthcare, or regulated industries—need tighter oversight. Colocation gives firms direct control over hardware, data residency, and networking. Latency-sensitive applications perform better when they’re not six hops away in someone else’s cloud (https://www.hcltech.com/blogs/the-rise-of-cloud-repatriation-is-the-cloud-losing-its-shine and https://thinkon.com/resources/the-cloud-repatriation-shift).

3. Hybrid Is the Smarter Default

The trend isn’t cloud vs. colo. It’s cloud + colo + private infrastructure—choosing the right tool for the workload. That’s been the path of Dropbox, 37signals, Ahrefs, Backblaze, and others (https://www.unbyte.de/en/2025/05/15/cloud-repatriation-2025-why-more-and-more-companies-are-going-back-to-their-own-data-center).

Case Studies That Talk Dollars

Let’s Be Brutally Honest: Public Cloud Isn’t a Unicorn Factory Anymore

Remember those “cloud-first unicorn” fantasies? They’re wearing off fast. Here’s the cold truth:

  • Cloud costs remain opaque and can bite hard.
  • Security controls and compliance on public clouds are increasingly murky and expensive.
  • Vendor lock-in and lack of control can stifle agility, not enhance it.
  • Real innovation—especially at scale—often comes from owning your infrastructure, not renting someone else’s.

What’s Your Infrastructure Strategy, Really?

Here’s a practical playbook:

  1. Question the hype. Challenge claims about mythical cloud savings.
  2. Audit actual workloads. Which ones are predictable? Latency-sensitive? Sensitive data?
  3. Favor colo for the dependable, crucial, predictable. Use public cloud for seasonal, experimental, or bursty workloads.
  4. Lock down governance. Owning hardware helps you own data control.
  5. Watch your margins. Infra doesn’t have to be sexy—it just needs to pay off.

The Final Thought

Cloud repatriation is real—and overdue. And that’s not a sign of retreat; it’s a sign of maturity. Forward-thinking companies are ditching dreamy catchphrases like “cloud unicorns” and opting for rational hybrids—colocation, private infrastructure, and only selective cloud. It may not be glamorous, but it’s strategic, sovereign, and smart.

Subscribe & Share now if you are building, operating, and investing in the digital infrastructure of tomorrow.

#CloudRepatriation #HybridCloud #DataCenters #Colocation #PrivateCloud #CloudStrategy #CloudCosts #Infrastructure #ITStrategy #DigitalSovereignty #CloudEconomics #ServerRentals #EdgeComputing #TechLeadership #CloudMigration #OnPrem #MultiCloud #ITInfrastructure #CloudSecurity #CloudReality

https://www.linkedin.com/pulse/why-colocation-private-infrastructure-making-cloud-hype-gailitis-bcguf

When Energy-Saving Climate Control Puts Drivers to Sleep: The Hidden CO₂ Problem in Modern Cars


A few weeks ago, a Latvian TV segment by journalist Pauls Timrots caught my attention. He talked about that strange heaviness drivers sometimes feel on long trips — not quite fatigue, not quite boredom, but a foggy drowsiness that creeps in, especially at night or in stop-and-go traffic.

What struck me is that most people know the feeling but don’t have a name for it. We assume it’s just “tiredness.” Yet the culprit, in many cases, is something more invisible: carbon dioxide (CO₂) buildup inside the cabin.

I first learned about this years ago in tropical cities, where taxis often ran their air conditioning permanently in recirculation mode. With the fresh-air intake closed and windows up, CO₂ levels in those cabs would climb to levels I’d normally only expect in a packed lecture hall with no ventilation. I once measured 5,000 ppm in a taxi — a concentration known to cause drowsiness, headaches, and sluggish thinking.

Show the driver the “fresh air” button, and within minutes the numbers fell, along with the yawns.

Fast forward to today. The difference is that the “driver” making that decision in your car is often not you — it’s the HVAC algorithm. To save energy, modern cars (whether ICE, hybrid, or EV) lean heavily on recirculation. Some models even flip into recirc automatically, without a clear dashboard indicator, sometimes even in manual climate mode. Unless you’re carrying a CO₂ sensor (like an Aranet), you may never know why you suddenly feel like nodding off.


What the Science Shows

Outdoors, CO₂ sits at about 420 ppm. Most building standards aim to keep indoor levels below 1,000 ppm, because research links higher levels to impaired concentration and increased fatigue. By 1,500–2,000 ppm, many people feel distinctly heavy-eyed.

And in cars? Levels climb shockingly fast. One Swedish study found that with four people in a closed cabin, CO₂ reached 2,500 ppm within five minutes — and 6,000 ppm within 20 minutes — even with some ventilation. In real-world driving tests, single-occupant vehicles often cross 1,500 ppm in less than half an hour when the HVAC is favoring recirculation.
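Those figures are easy to sanity-check with a simple box model. The Python sketch below assumes a fully sealed cabin with no fresh-air exchange at all (so it overshoots the 20-minute study figure, which included some ventilation), roughly 3 m³ of cabin air, and about 0.3 litres of CO₂ exhaled per person per minute at rest; all three are typical textbook assumptions, not measurements from the study.

```python
# Simple box model of CO2 buildup in a fully recirculating car cabin.
# Assumptions: ~3 m^3 of cabin air, ~0.3 L/min of CO2 per resting occupant,
# and zero fresh-air exchange. Real cabins leak, so this is an upper bound.

CABIN_VOLUME_L   = 3_000    # assumed cabin air volume (3 m^3)
CO2_PER_PERSON_L = 0.3      # assumed CO2 exhaled per person, litres per minute
OUTDOOR_PPM      = 420

def cabin_ppm(people: int, minutes: float) -> float:
    added_litres = people * CO2_PER_PERSON_L * minutes
    return OUTDOOR_PPM + added_litres / CABIN_VOLUME_L * 1_000_000

for people, minutes in [(4, 5), (4, 20), (1, 30)]:
    print(f"{people} occupant(s), {minutes:>2} min recirculation -> "
          f"about {cabin_ppm(people, minutes):,.0f} ppm")
```

Even this crude model lands in the same range as the measurements: four occupants push a sealed cabin past 2,000 ppm within five minutes.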

That’s not just an air quality number. That’s a road safety issue.


What AI Tools Reveal About Awareness

I ran this topic through a few AI-powered trend analysis tools and forum scans, and the pattern was striking:

  • On mainstream driver forums, there’s almost zero discussion of CO₂. People talk about foggy glass, stale air, or “feeling tired,” but rarely connect it to cabin CO₂.
  • In niche communities — Tesla owners, Rivian forums, overlanders, and RV groups — the conversation is growing. These are the people who buy CO₂ meters and post screenshots of 2,000+ ppm.
  • Academic research is solid and ongoing, but mostly locked away in journals. Few car magazines or mainstream outlets ever reference it.
  • Automakers? Silent. Some premium brands include CO₂ sensors, but they’re marketed as “air quality features” (to block pollution), not as safety tools.

What AI essentially shows is a disconnect: the science is mature, the user experience is common, but the public conversation is minimal.


Practical Fixes for Drivers

The good news is that once you know what’s happening, it’s not hard to fix:

  • Prefer fresh air over recirculation when cruising.
  • If your car insists on switching back to recirc, try toggling it off manually (some Toyotas respond to this reset trick).
  • In stubborn systems, crack the window 1–2 cm. Noisy, yes. Effective, absolutely.
  • Keep your cabin filter clean — a clogged filter nudges the HVAC to favor recirc.
  • Consider carrying a small CO₂ meter. Once you’ve seen a cabin climb past 1,500 ppm, you’ll never unsee it.

For Automakers and Fleets

This is an easy win for safety and trust.

  • Show recirculation state clearly in the UI. Don’t override it without a visible cue.
  • Add a basic CO₂ sensor and bias toward fresh air when levels rise (a minimal control-logic sketch follows this list).
  • Offer a persistent “Fresh Air Priority” setting.
  • For fleets: train drivers to recognize drowsiness linked to air quality, not just lack of sleep.
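Here is a minimal Python sketch of what that "bias toward fresh air" logic could look like if a cabin CO₂ sensor were present. The thresholds and function names are hypothetical, not any automaker's actual implementation.

```python
# Sketch of CO2-aware HVAC mode selection. Thresholds and names are
# hypothetical, not any vendor's API.

CO2_FRESH_AIR_PPM = 1_000   # prefer fresh air above this level
CO2_ALERT_PPM     = 1_500   # additionally warn the driver above this level

def choose_hvac_mode(co2_ppm: float, driver_forced_recirc: bool) -> str:
    """Prefer fresh air when CO2 climbs, unless the driver has explicitly
    forced recirculation (which should then be clearly shown in the UI)."""
    if co2_ppm >= CO2_FRESH_AIR_PPM and not driver_forced_recirc:
        return "fresh_air"
    return "recirculation"

for ppm in (600, 1_100, 1_700):
    mode = choose_hvac_mode(ppm, driver_forced_recirc=False)
    alert = "  + drowsiness warning" if ppm >= CO2_ALERT_PPM else ""
    print(f"CO2 {ppm:>5} ppm -> {mode}{alert}")
```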

Why It Matters

Older cars did what you told them: fan on, recirc off, end of story. Newer vehicles are smarter, but their logic is mostly about efficiency and temperature comfort — not human alertness. Energy savings are important. But alert drivers are non-negotiable.

This is one of those invisible safety issues that deserves daylight. Just as we take seat belts, ABS, and air filters for granted, we should start treating fresh air as a core safety feature, not a luxury setting.

Until then, the responsibility is on us as drivers: know the signs, press the button, crack the window.

Because the next time you feel a wave of unexplained drowsiness behind the wheel, it may not be your body telling you to sleep. It may just be the air you’re breathing.


Curious to hear from others: Have you ever noticed this effect? Have you measured CO₂ in your car? And should automakers be more transparent about it?

Subscribe & Share now if you are building, operating, and investing in the digital infrastructure of tomorrow.

#RoadSafety #DriverSafety #AutomotiveInnovation #VehicleSafety #AirQuality #CarbonDioxide #CabinAir #HealthAndSafety #HumanFactors #TransportationSafety #FutureOfMobility #SustainableTransport #SmartCars #ConnectedCars #AutomotiveEngineering #ArtificialIntelligence #AIInsights #DataDriven #SafetyFirst #LinkedInThoughtLeadership

https://www.linkedin.com/pulse/when-energy-saving-climate-control-puts-drivers-sleep-andris-gailitis-4rlif

Data Independence Is National Security — Europe Can’t Wait


In today's hyper-connected world, geopolitical tensions are often the stimulus that brings about change. When borders are closed, supply chains are disrupted, or critical industries are hit with sanctions out of nowhere, we suddenly understand how fragile our physical and digital infrastructures are, and how completely they depend on external circumstances.

But here's the irony: the absence of an active geopolitical crisis can be just as dangerous. In a "stable" political climate, people relax. Investments in strategic infrastructure such as data centers, cloud sovereignty, and digital independence are pushed back. The sense of urgency fades, until the next crisis makes painfully clear what we failed to build.

Europe in particular is at a crossroads. The continent has some of the world's most advanced data centers and strong regulatory frameworks, yet it remains heavily reliant on non-European cloud providers for the backbone of essential services. Without sustained investment in sovereign infrastructure, this dependency will only deepen.

The Illusion of Stability

Periods of geopolitical calm create a dangerous illusion: that global connectivity and access to resources are permanent and guaranteed. Yet history—even recent history—proves otherwise. The 2021 semiconductor shortage showed just how fragile global tech supply chains are. Energy supply disruptions arising from regional conflicts showed that even "reliable" partners may suddenly become unavailable. Data localization disputes and sudden changes in legal frameworks leave organizations scrambling. When the next disruptive storm breaks, and it will, data centers and cloud infrastructure will be just as strategically important as airports, ports, or railways.

Cloud Independence Goes Beyond Storage

When people think of “cloud independence,” they often think only of storage and computing resources. But it’s much more than that:

Operational sovereignty—ensuring critical workloads can run entirely within European legal jurisdiction.

Security assurance—control over the physical and logical environments where sensitive data lives, and clarity about which systems and applications must be checked for compliance.

Resilience—the capacity of systems to withstand the shocks that geopolitics, economics, or society throws at them.

Meanwhile, the European hyperscale cloud market is currently dominated by U.S.-based companies. Their technology is first-rate, but their legal obligations (such as the U.S. CLOUD Act) may clash directly with European requirements on privacy and sovereignty.

Take Microsoft, which powers Azure: its terms of service are extensive and change frequently, and the same is true of the other large U.S. platforms. Whatever those terms promise, U.S.-based providers cannot fully guarantee data protection or privacy for an organization running its services on their infrastructure.

The Strategic Role Of Data Centres

Data centres are the heart of the digital economy. If they stopped working tomorrow, there would be no cloud computing left. Building and running them at scale involves:

1. Significant capital investment—from both the public and private sectors, including research and development.

2. High operational expertise—from power management to cooling technology (EC fans, liquid cooling, and so on). A key design criterion is minimizing power consumption, both to cut electricity costs and to reduce greenhouse-gas emissions (power usage effectiveness, sketched after this list, is the standard yardstick), and the facility must also withstand natural disasters and fire.

3. Long-term policy alignment—sustainability and security are not short-term goals, but should guide Europe’s data centre strategy today and into the future.
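For reference, the standard yardstick for that power-consumption criterion is power usage effectiveness (PUE): total facility energy divided by IT equipment energy. The meter readings in this small Python sketch are invented for illustration.

```python
# Power Usage Effectiveness (PUE) = total facility power / IT equipment power.
# A PUE of 1.0 would mean every kilowatt reaches the servers; real sites are
# higher. The meter readings below are invented for illustration.

it_load_kw       = 2_000   # servers, storage, network gear
cooling_kw       = 500     # chillers, CRAC/CRAH units, pumps, fans
power_losses_kw  = 120     # UPS and distribution losses
lighting_misc_kw = 30

total_facility_kw = it_load_kw + cooling_kw + power_losses_kw + lighting_misc_kw
pue = total_facility_kw / it_load_kw
print(f"PUE = {total_facility_kw} / {it_load_kw} = {pue:.2f}")
```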

Europe clearly needs to expand its data centre landscape; the question isn't whether, but when and with what degree of independence. If organizations place their lifeblood—business-critical data and applications—in foreign-owned infrastructure, their operational independence is no longer entirely within their own control. This is not scaremongering. Europe re-examined its energy dependency not to spread panic but to act; it should now do the same with its digital dependency on American companies.

Lessons from the Energy Sector

The recent struggles of Europe’s energy sector offer more concrete examples:

1. Diversify your sources—Just as Europe sought out different energy suppliers, it must also invest in a diverse set of sovereign cloud providers and data centres.

2. Invest in domestic capacity—Local renewable energy projects reduced dependence on volatile fossil fuel markets. Data centres now need the same local investment to lessen reliance on foreign hyperscalers.

3. Plan for worst-case scenarios—Power reserves are the energy sector's equivalent of data redundancy and failover systems.

What Needs to Happen Now

If Europe is to secure its digital future, three things take priority:

Promote Sovereign Cloud Initiatives

– Support and promote EU-law-compliant cloud services backed by European capital. Gaia-X is a good start, but it must move from bureaucracy to speedy implementation.

Incentivize Local Data Center Growth

– Encourage investment in new data centers within EU countries through tax breaks, subsidies, and easier permitting—prioritizing "green" technology.

Educate Business Leaders about Digital Sovereignty

– Many executives do not fully grasp how world events directly affect their IT. As Europeans, we must take notice now, and act.

A Call to Action

There are no overt geopolitical flashpoints at present, but that does not excuse us from acting; it is the best time to prepare for the next storm. In a crisis, budgets tighten, supply chains break, and decision-making becomes merely reactive. Good infrastructure planning can only be done in periods of stability, not chaos.

Europe has the resources, the rules, and the regulatory framework to be a world leader in sovereign cloud and data centre operation. But time is short; the next crisis will spell it out for us in words of one syllable. Let's not wait until the storm arrives to begin building shelter.

Author’s Note:

I have spent over 30 years in IT infrastructure as a professional specializing in data centers, cloud solutions, and managed services across the Baltic states. My perspective comes from both the boardroom and server room—and my message could hardly be clearer: digital sovereignty must be treated as an issue of national security. Because that is exactly what it is.

Subscribe & Share now if you are building, operating, and investing in the digital infrastructure of tomorrow.

#DataCenter #CloudComputing #HostingSolutions #GreenTech #SustainableHosting #AI #ArtificialIntelligence #EcoFriendly #RenewableEnergy #DataStorage #TechForGood #SmartInfrastructure #DigitalTransformation #CloudHosting #GreenDataCenter #EnergyEfficiency #FutureOfTech #Innovation #TechSustainability #AIForGood

https://www.linkedin.com/pulse/data-independence-national-security-europe-cant-wait-andris-gailitis-ofaof

AI Inside AI: How Data Centers Can Use AI to Run AI Workloads Better


Hosting AI workloads is a high-stakes challenge: dense GPU clusters, hard-to-predict demand, and extreme cooling requirements. But the same technology driving those workloads can also help the data center itself run more smoothly, safely, and sustainably.

How to use AI to manage the AI data center in 10 steps:

1. Predictive thermal management. AI models forecast temperature changes and airflow patterns in real time, spotting hot spots before they form and directing cooling—for example, from contained liquid-cooling units with their own refrigeration loops—straight onto the components that need it.

2. Predictive maintenance. Use vibration, power draw, and other sensor data from chillers, UPSes, and PDUs to flag the equipment most likely to fail, long before it does (a minimal anomaly-detection sketch follows this list).

3. Energy-aware scheduling of AI training jobs. Run workloads when the grid is cleaner and shift them to regions with more renewable generation.

4. Optimized placement of AI workloads. Spread GPU-heavy jobs across clusters to even out the load, so no region is overloaded while others sit idle.

5. Real-time adaptive efficiency monitoring. Continuously track PUE, WUE, and carbon intensity and feed real-time recommendations to operations—while flagging when chasing marginal efficiency gains would put reliability at risk.

6. Security anomaly detection. Scan access logs, security cameras, and network traffic for signs of attempted intrusion.

7. GPU/TPU hardware-health forecasting. Spot symptoms of degradation—rising error rates, components overheating or slowing down—so parts can be replaced before training jobs fail.

8. Incident simulation and response planning. Run digital "fire drills" to see how the facility would respond if cooling failed, power was lost, or a cyber attack hit.

9. Automated compliance reporting (ISO, SOC, and so on). Pull from system and operational logs to generate consistent, audit-ready reports on demand—and onboard customers faster.

10. Intelligent resource scaling. Power GPU nodes up and down to match actual demand, keeping energy costs under control without starving workloads.
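As a minimal sketch of the idea behind step 2, the Python snippet below flags a chiller whose power draw drifts away from a recent baseline using a z-score. The readings are synthetic, and a production system would use far richer models and many more sensors.

```python
# Sketch: flag a chiller whose power draw drifts away from its baseline.
# Readings are synthetic; real systems would fuse many sensors and models.

from statistics import mean, stdev

readings_kw = [41.8, 42.1, 41.9, 42.3, 42.0, 41.7, 42.2, 42.1,   # normal
               44.9, 46.3, 47.8]                                  # drifting up

WINDOW = 8        # number of readings used as the healthy baseline
THRESHOLD = 3.0   # z-score above which a maintenance ticket is raised

baseline = readings_kw[:WINDOW]
mu, sigma = mean(baseline), stdev(baseline)

for i, reading in enumerate(readings_kw[WINDOW:], start=WINDOW):
    z = (reading - mu) / sigma if sigma else 0.0
    flag = "  <-- schedule an inspection" if abs(z) > THRESHOLD else ""
    print(f"t={i:2d}  {reading:5.1f} kW  z={z:6.2f}{flag}")
```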

In the end, if you host AI, your operations should be AI-driven too. Given the scale and complexity of modern AI workloads, using machine intelligence to run the facility itself—from spot-by-spot cooling control to scheduling—is not a matter of choice but of necessity; it has already become routine everywhere else.

Subscribe & Share now if you are building, operating, and investing in the digital infrastructure of tomorrow.

#DataCenter #CloudComputing #HostingSolutions #GreenTech #SustainableHosting #AI #ArtificialIntelligence #EcoFriendly #RenewableEnergy #DataStorage #TechForGood #SmartInfrastructure #DigitalTransformation #CloudHosting #GreenDataCenter #EnergyEfficiency #FutureOfTech #Innovation #TechSustainability #AIForGood

https://www.linkedin.com/pulse/ai-inside-how-data-centers-can-use-run-workloads-better-gailitis-1hjzf
