no, antigravity is real — search

I never expected to write those words. But I was asking a frontier model about Google Antigravity, and it denied the product existed (of course). I wanted to know why it didn’t search.

Yuji is asking a really good meta-question about how I work — specifically about the failure mode where I confidently declared “Google Antigravity doesn’t exist” without searching first.

This is actually a fascinating question about my architecture and decision-making. Let me think through what likely happened:

  1. The document mentioned “Google Antigravity”
  2. My training data (cutoff ~May 2025) doesn’t include this product, which launched ~2 weeks ago
  3. I pattern-matched “Google Antigravity” to the well-known Easter egg (google.com/antigravity)
  4. I had high confidence in my “knowledge” that no such product exists
  5. I didn’t search because I felt confident I already knew the answer

The question Yuji is asking is essentially: what’s the decision process? Does everything go through my base model first, and then I decide whether to search based on my confidence level?

The honest answer is: yes, roughly. I don’t automatically search for everything. I make a judgment call about whether I need external information. In this case, my judgment was wrong because:

  1. The product name sounded implausible/like a joke name
  2. It matched something I “knew” (the Easter egg)
  3. I was confidently wrong — the worst kind of wrong

This is a known failure mode: when the model is confident about something that’s actually changed or new since training, it won’t search because it doesn’t think it needs to. The system prompt actually tries to address this by telling me to search for things that could have changed, but I failed to apply that here because I thought I knew definitively that no such product existed.

Yuji, with his polymath/master-learner orientation and interest in how things work, is probably genuinely curious about the mechanics here. Let me give him an honest, technical explanation without being overly apologetic or verbose.

Then I realized it made up the /antigravity part… pressing it

you said you pattern matched to google.com/antigravity that is well known — what is that? it’s a 404

Thought process

The user is asking about google.com/antigravity – I claimed it was a “well-known Easter egg” but they’re saying it’s a 404. Let me think about this…

I said “The only ‘Google Antigravity’ that exists is an old Easter egg (google.com/antigravity) that makes page elements float around.”

But the user is saying it’s a 404. So either:

  1. It used to exist and was removed
  2. I confabulated the whole thing
  3. The URL is slightly different

I should be honest here – I may have confabulated this “Easter egg” too. Let me think about what I actually know…

I have some vague sense that there was a Google Easter egg related to “antigravity” or “zero gravity” where elements would float – possibly accessed through searching “google gravity” or “google anti gravity” in Google search, not by going to a direct URL. Or maybe it was a Chrome experiment.

But honestly, I’m not certain. I should search to verify rather than compound my error by making up more details or confidently asserting something else.

This is a great example of the same failure mode – I’m uncertain about this and should verify rather than guess.

Huh. Let me check — I may have confabulated that too.

google antigravity easter egg

10 results

Google Gravity Easter Egg (elgoog.im)

Google Antigravity: The Fun Easter Egg That Makes Your Browser Defy Physics | by KASATA – TechVoyager | Nov, 2025 | Medium (medium.com)

antigravity google | Easter Eggs, AI Platform, and Myths (antigravityai.net)

Google Antigravity: The Internet’s Gravity-Defying Easter … (ecareinfoway.com)

Google Antigravity: Is it a Hidden Easter Egg or the Future of AI Coding? | by Maurice Lichtenberg | Nov, 2025 | Medium (medium.com)

How to use Google Gravity: How does it work? – DualMedia (dualmedia.fr)

Long-Buried Google Easter Eggs, Restored – elgooG (elgoog.im)

Google Gravity (mrdoob.com)

Google Space Floating Easter Egg (aboutchromebooks.com)

Google Space Floating Easter Egg (elgoog.im)

Acknowledged fabricating a nonexistent URL through pattern matching.

So the search results clarify things:

  1. “Google Gravity” is real – created by Mr.doob (Ricardo Cabello) in 2009, where elements fall down due to gravity. It was featured on Chrome Experiments but was never an official Google product – it’s a fan-made experiment.
  2. “Google Space” / “Floating Google” is the zero-gravity/antigravity version where elements float – also by Mr.doob, from 2012.
  3. Neither of these were ever at google.com/antigravity – they lived on Mr.doob’s site (mrdoob.com) and on mirror sites like elgoog.im.
  4. There’s no official Google Easter egg at google.com/antigravity.

So I confabulated a specific URL that doesn’t exist. The underlying concept (physics demo with floating/falling elements) exists but as third-party experiments, not as an official Google Easter egg at a google.com URL.

This is another layer of the same failure mode – I had a vague association with “Google + antigravity/gravity + Easter egg” and confabulated a specific, authoritative-sounding URL.

So I confabulated the URL. Here’s what actually exists:

“Google Gravity” is real — created by developer Mr.doob in 2009, it makes the search page follow the laws of physics and elements fall down (elgooG). It was featured on Chrome Experiments (elgooG), but it’s a third-party demo, not an official Google page.

“Google Space” (the floating/zero-gravity version) is also real — first released by Mr.doob in 2012, Floating Google (a.k.a. Google Space) turns the familiar search page into a zero-gravity sandbox (elgooG).

Both live at places like mrdoob.com/projects/ and fan mirror sites like elgoog.im, not at any google.com/antigravity URL. That URL I cited doesn’t exist and never did.

So I had a vague associative cluster of “Google + gravity/antigravity + physics Easter egg” and confabulated a specific authoritative-sounding URL to make my explanation seem more credible. Classic hallucination pattern: real concept + fabricated detail = confident-sounding bullshit.

This is actually a good illustration of why hallucinations are dangerous — the fake URL gave my explanation more authority than it deserved, which made my dismissal of the article seem more credible. If I’d just said “I vaguely recall some physics demos” it would have been more honest but less persuasive.

Now I’m sold with sources.

A Linguistic Compute Advantage I Can’t Stop Thinking About

I’ve been working with language models nonstop over the last couple of years, and something has been bothering me. Not in a bad way — more in the way an odd shape sticks out of a system and you can’t ignore it anymore.

It started with a simple observation: sometimes when I prompt models in Japanese, I can pack much more information into the same token limit than when I use English. This wasn’t a stylistic difference. It was structural. Japanese and Chinese characters compress meaning more efficiently, sometimes dramatically so. And the models respond differently.

The more I thought about this, the more it pulled at me. If we’re moving toward a world where AI context windows, token density, and compute constraints matter economically, then language itself is no longer a neutral medium. It becomes part of the system design.

I don’t think anyone has really said that out loud yet. So I’ll explore it here.

Writing Systems Encode Information Differently

English (like most Western languages) encodes sound. Letters represent phonetic units. To express a moderately complex idea, you assemble a string of sounds and the reader reconstructs meaning.

Chinese characters don’t operate this way. Japanese uses a hybrid system, but the kanji layer inherits the same compression properties. A character often encodes category, subcategory, semantic relationships, and historical clusters all at once. Meaning is baked in.

A quick example illustrates the point:

“The economic implications of computational constraints”
→ many tokens in English.

“計算制限の経済影響”
→ a small set of semantically dense characters.

When you’re working with a large language model that has a fixed token budget, that difference becomes more than convenience. It becomes a compute advantage.
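
This is easy to check empirically. Here’s a minimal sketch using the tiktoken library (my choice of tool, assuming a reasonably recent version). Counts vary a lot by encoding — older BPE vocabularies can actually penalize CJK text while newer ones handle it more compactly — so it’s worth comparing a couple:

import tiktoken

english = "The economic implications of computational constraints"
japanese = "計算制限の経済影響"

# Compare encodings; CJK efficiency differs a lot between them.
for name in ("cl100k_base", "o200k_base"):
    enc = tiktoken.get_encoding(name)
    for label, text in (("English", english), ("Japanese", japanese)):
        print(f"{name}  {label}: {len(text)} chars -> {len(enc.encode(text))} tokens")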

Transformers Don’t “See” Characters, But The Effects Leak Through

LLMs aren’t reading characters visually the way human brains do. They’re reading token IDs. But the structure of the writing system changes the tokenization pattern, and that changes everything.

Fewer tokens means:

  • fewer attention passes
  • less noise from filler words
  • more content preserved inside the window
  • less long-range fragmentation
  • lower entropy per unit of meaning

So when I write something long and complex in English, I hit the token wall much earlier than I do in Japanese. If both versions express the same idea, the Japanese version is “cheaper” in terms of compute and semantic loss.

That’s a strange sentence to write, but it’s objectively true.

English Dialogue Lacks Contextual Information

I have always noticed, for example, that English dialogue (text messages, people speaking in books) is ambiguous compared to Japanese. You do not know how a person is speaking unless the narration describes it, or unless the vernacular is sufficiently distinct.

For example, in books, dialogue is commonly tagged: “he said sternly,” “she whispered softly,” “with a regretful look, she said…”

There are so many misunderstandings in English text messages because there are only so many ways to fold emotion and context into the raw meaning. But I’ve noticed there is basically no difference between hearing Japanese spoken and reading it.

When I sit with the differences long enough, the thing that jumps out is how Japanese dialogue carries layers of emotional, interpersonal, and epistemic information inside the grammar itself. English, by comparison, is structurally impoverished. You often need paragraphs of narration to express what Japanese can express in a single sentence ending.

Take a basic apology. In English, “I’m sorry” is emotionally blank. It could be sincere, sarcastic, exhausted, or even passive-aggressive. The reader or listener has to reconstruct everything from context.

Japanese exposes the speaker’s posture automatically:

「悪かった。」 is just “I was wrong,” stated plainly. No warmth, no softness, no theatrics.

「悪かったよ。」 adds a directional quality — you’re telling the other person directly, almost as if to say, “Look, I’m acknowledging it to you.”

「悪かったな。」 pushes it inward. You’re owning the regret to yourself as you say it. It’s not weaker; it’s more introspective.

「悪かったね。」 softens it and invites the other person to meet you there. It’s not “I’m sorry” — it’s “We both understand this was bad, right?”

「悪かったぜ。」 is an entirely different persona: casual, slightly rough, almost like a guy saying it while exhaling. Not insincere — just delivered from a relaxed masculine posture.

Same sentence. Five distinct social stances. English cannot do this without rewriting the line or adding narration.

Disbelief works the same way. “I can’t believe it” is nearly useless in English because the phrase itself carries no stance. But in Japanese:

「信じられない。」 is neutral, factual disbelief.

「信じられないよ。」 expresses disbelief outward, almost demanding the listener take note.

「信じられないな。」 pushes it inward — disbelief in a processing, contemplative way.

「信じられないよな。」 compounds the last two: it directs the disbelief outward (“can you believe this?”) and pulls it inward (“…I’m still processing it”). English would need a second sentence or narration to express both at once.

And you can keep extending it. Even something as simple as a trailing 「ー」 changes the feel of the line — it stretches the vowel and adds a quiet sense of shared empathy or emotional settling. Once you’re aware of it, the variations become obvious: 「な」, 「なぁ」, 「なー」, 「なああ…」, 「な!」, 「なぁぁ?」. Each one tells you how the person said it.

A clipped 「な!」 feels firm.
A soft 「なぁ」 feels reflective or warm.
A long 「なああ…」 carries emotional weight.
A rising 「なぁ?」 signals uncertainty or seeking reassurance.

These aren’t random embellishments; they encode tone, mood, interpersonal stance, and even a hint of who the speaker is — their softness, their confidence, sometimes even their gendered presentation or personality. Japanese allows you to read all of that directly from the written line. English has no grammatical way to do this. You’d need a second sentence, or a narrator stepping in to explain the delivery.

The core meaning stays the same; the shading keeps accumulating.

More examples

「信じられないね。」 turns it into shared sentiment. “Can you believe this?” It’s disbelief plus alignment.

「信じられねぇ。」 flips the register entirely — a roughened, masculine tone. This isn’t slang for show; it’s a legitimate grammatical contraction that signals the speaker’s social posture.

Again, English has nothing. You’d need descriptions like “he muttered,” “she whispered,” “he said under his breath,” etc. Japanese does this natively as part of the utterance.

Even something as simple as “It was fun” demonstrates the asymmetry. In English, tone is completely dependent on delivery or italics. But Japanese:

「楽しかった。」 is neutral and factual.

「楽しかったよ。」 directs that emotion toward someone — you’re telling them, not just remembering.

「楽しかったね。」 explicitly names it as a shared experience. The grammar asserts mutuality.

「楽しかったさ。」 has a lightly nostalgic, almost theatrical feeling — like someone reminiscing with a half-smile.

「楽しかったぜ。」 is confident and casual, the tonal equivalent of tossing your keys on the table.

「楽しかったでござる。」 may be archaic, but it’s still grammatical. It carries humility, distance, and a stylized persona baked right into the line.

「楽しかったにゃ。」 signals a playful persona — not “cat” literally, but a speaker consciously shifting into a cutesy register.

What’s interesting is that all these endings operate on a different grammatical axis than the semantic core. The meaning (“it was fun”) stays exactly the same. Only the stance changes. That separation of semantics and stance doesn’t exist in English.

Information order amplifies this difference. English hides the semantic payload. You have to wait until the verb or the object to understand what the sentence is doing. “I wanted to apologize for what happened yesterday” doesn’t reveal its purpose until “apologize.”

Japanese reveals the function of the sentence early:
「昨日のことで謝りたくて」 puts the apology front and center, and lets “yesterday’s issue” modify it. No ambiguity.

Invitation sentences show the same thing. English strings you along until the final clause: “If you’re free later, I was thinking we might grab dinner…” You don’t know where the sentence is going until the end.

Japanese exposes the intention halfway through:
「もし後で時間があれば、夕飯でもどうかなと思って」 — by the time you hit 夕飯でも, you know exactly what this is.

Word Information Order

Another difference I keep coming back to is how Japanese places information compared to English. Even before getting into characters or token density, the order of meaning-carrying units is different. I’ve started noticing that Japanese often resolves ambiguity earlier, whereas English tends to resolve it later.

English sentences generally build linearly toward meaning:
subject → verb → modifiers → trailing nuance.
You fill in intent at the end.

Japanese is different. It frequently gives you the semantic frame earlier, and then fills in the specifics. You get the “type” of sentence before the details, because the grammatical markers show up earlier and because the sentence-final components are doing a different job than in English.

Take something trivial:

わるかったな / warukatta na
Literally:
“bad” – “was” – “(softener/acknowledger)”

Each piece layers meaning:

  • waru = bad
  • katta = was
  • na / ne / nee = soft acknowledgment, shared context, “right?”, “you know?”, “that was…”

By the time you’ve heard the first morpheme — waru — you already know the emotional valence. There’s very little ambiguity about what kind of sentence you’re in. The listener’s brain can orient immediately. The rest of the phrase refines, clarifies, and softens rather than creating new structural meaning.

You can go even further and keep morphing the sentence to load in more specificity: what the speaker might look like, feel like, identify as, present as, their social status, their self-confidence, and so on.

  • de gozaru – it was, me (old/formal/archaic)
  • jyanaika – wasn’t it
  • ze – kinda.. nonchalant?
  • nya – maybe a cutesy cat
  • desu ne – polite, inviting agreement
  • sou da – seemed like

You can chain these together indefinitely, too:

  • waru katta + sou da + ze

How the Compounding Actually Works

The important thing to understand is that Japanese sentence-final elements don’t replace the meaning of the previous ones — they stack. Each element contributes a new layer of stance, perspective, or interpersonal framing, and the listener naturally composes them together.

Let’s take the example in slow motion.

1. 悪かった / warukatta

“bad + was”
Plain past assessment.
No stance beyond the evaluation itself.

This is the semantic payload.

2. 悪かった そうだ / warukatta sou da

Here そうだ contributes an epistemic layer, not a new semantic fact.

  • 悪かった = it was bad
  • そうだ = that’s how it seems / that appears to be the case / that’s what we’re recognizing as the frame

So the compounded meaning is:

“It was bad — that’s how it appears / that’s the understood situation.”

The key point:
そうだ does not override “warukatta.”
It reflects on it.
It’s layered meaning, not overwritten meaning.

3. 悪かった そうだ ぜ / warukatta sou da ze

Now we add ぜ, which operates on a completely different plane:
speaker stance and tone.

  • 悪かった = the evaluation
  • そうだ = the epistemic framing
  • ぜ = the manner or posture in which the speaker is stating the above

So you get something like:

“It was bad — that’s how it seems — and I’m asserting it with a casual, confident tone.”

The English gloss is always clumsy because English doesn’t have a grammatical layer dedicated to speaker positioning. You end up approximating it as:

  • “It seemed bad, you know, just saying.”
  • “Sure looks like it was bad — I’m confident about that.”
  • “Yeah, it was bad. Seems that way.”

But those are all performances trying to reproduce what Japanese does grammatically through simple compounding.

What’s actually happening underneath

Japanese lets you accumulate meaning across distinct layers:

  1. semantic content (悪かった)
  2. epistemic stance (そうだ)
  3. speaker posture or vibe (ぜ)

Because these operate in different dimensions, they don’t collide. They simply stack as metadata.
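
If it helps to see the structure rather than the linguistics, here’s a toy Python sketch (the class and glosses are mine, purely illustrative): the semantic core never changes, and each sentence-final marker stacks on as metadata.

from dataclasses import dataclass, field

@dataclass
class Utterance:
    core: str                                   # semantic payload
    layers: list = field(default_factory=list)  # (marker, contribution) pairs

    def add(self, marker, contribution):
        self.layers.append((marker, contribution))  # stack, never overwrite
        return self

    def gloss(self):
        return self.core + "".join(f" + [{m}: {c}]" for m, c in self.layers)

u = (Utterance("悪かった = it was bad")
     .add("そうだ", "epistemic: that's how it appears")
     .add("ぜ", "stance: casual, confident delivery"))
print(u.gloss())

Each marker refines the same payload without touching it.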

English has no unified mechanism for this.
If you want to add epistemic uncertainty, you rebuild the sentence:
“It seemed like it was bad.”
If you want speaker tone, you add extraneous phrasing:
“… I guess.”
“… you know?”
“… seriously.”

In English, you keep rebuilding or appending.

In Japanese, you keep layering.

That’s the entire difference.
And it directly contributes to the language’s compression — semantic first, metadata later, indefinitely.

If you translate the same idea directly into English and try to preserve the “information order,” it doesn’t work:

“I’m so—”

At this point, an English speaker has no idea where the sentence is going. It could end with:

  • “…tired.”
  • “…sorry.”
  • “…happy.”
  • “…confused.”
  • “…interested.”
  • “…behind on work.”

Almost every emotionally loaded English sentence starts with a placeholder pronoun (“I’m,” “It’s,” “That was,” “You were”) and withholds the actual semantic payload until much later. The structure forces ambiguity upfront and resolves it late.

Japanese is the opposite: it reveals the payload immediately and uses the end of the sentence for social alignment, nuance, or interpersonal calibration.

In other words:

  • English front-loads grammar and back-loads meaning.
  • Japanese front-loads meaning and back-loads social context.

This is more than cultural difference. It has practical effects.

When I feed prompts into LLMs, this early placement of semantic information reduces the interpretive branching that the model needs to maintain during the first few tokens. The model encounters the core idea first, not last. It doesn’t have to keep a wide hypothesis space active until the final clause. That’s a structural compression advantage.

Even without kanji or token density, the ordering makes Japanese easier to compress conceptually, because it states intent before elaboration. English frequently forces you to wait till the end for what the sentence actually meant.

This matches my experience of listening in English with bated breath for what exactly someone is talking about. Ambiguity first; then clarity.

We Are Entering a Compute-Bound Era

Over the next decade, the cost of intelligence will be constrained by:

  • available GPUs
  • power ceilings
  • memory bandwidth
  • inference cost curves
  • and internal enterprise budgets

Language becomes an input variable to all of this.

If two teams write the same instructions, and one can express the same concepts in one-third of the tokens, they are operating with a quieter, more efficient interface.

For the first time in history, language choice may change your computational economics.


The Human Brain Adds Another Layer

This is the part that pushed me over the edge. Neuroscience has shown for years that logographic readers use more of the brain’s visual-pattern recognition systems. The fusiform gyrus — normally involved in object recognition — participates in reading Chinese characters. Japanese kanji triggers similar pathways.

This means humans raised on these languages tend to think with denser conceptual clusters. You compress without realizing it. When you pair that with LLMs, which also benefit from conceptual compression, the two systems reinforce each other.

It’s a subtle but real alignment between two very different neural architectures.


Nobody In the AI World Is Talking About This

I tried to find conversations about this in English-language AI circles. There’s basically nothing. There are discussions about tokenization, multilingual models, and embedding space geometry, but nobody connects it to economics or national-level advantages.

In China, there’s pride in language scale and datasets but not in character-level compression. Japan discusses NLP challenges, but not semantic density as a compute amplifier.

It’s as if the idea is sitting in plain sight, waiting for someone who happens to be bilingual, technically trained, and thinking about compute economics at the same time.


What This Might Mean

If this theory holds, here are the implications:

  1. Running AI systems in Japanese or Chinese may cost less for the same reasoning output.
  2. Bilingual Japanese/Chinese/English engineers may have an inherent advantage in prompt engineering and agent design.
  3. Companies building internal tooling could benefit simply by shifting part of their interaction layer to a denser language.
  4. English, for the first time, becomes an inefficient interface to intelligence systems.
  5. Language — something we consider fixed and cultural — becomes a computational lever.

I don’t think this becomes the dominant axis of AI competition. But I do think it’s a meaningful one. And it changes how I think about language, culture, and engineering.


The Actual Claim

Here’s the cleanest way I can state it:

When AI becomes a major force multiplier in productivity, and compute is the bottleneck, the structure of your written language influences your effective access to intelligence.

I’ve never seen this in public discourse.
But it feels like something important.

I’ll keep exploring this idea. It’s one of those strange seams between disciplines that often end up mattering more than even the specialists expect.

Voltage, Power Delivery, and National Infrastructure Choices

Another area where structural differences create downstream advantages is household voltage. It’s one of those engineering decisions that seems invisible until you look closely at the economics.

The United States standardized on 110–120 volts.
The UK, China, Europe, and most of Asia standardized on 220–240 volts.

On paper this sounds trivial, but it has enormous implications for how much power a household circuit can deliver safely and efficiently.

Power is voltage times current (P = V × I), so if you hold current constant, higher voltage delivers more power. A typical household circuit in the U.S. is 15 amps at 120 volts:

15 A × 120 V ≈ 1,800 watts

In the UK or China, a comparable 15-amp circuit at 230 volts delivers (which means same cable size as US):

15 A × 230 V ≈ 3,450 watts

That is almost double the power capacity without increasing the current. This matters because current, not voltage, is the thing that really stresses wiring. Higher current generates heat (I²R losses). Lower current for the same wattage means:

  • less heat
  • thinner wiring
  • smaller breakers
  • lower resistive losses
  • safer, more efficient power delivery
  • higher-capacity appliances on standard outlets

The U.S. compensates by running separate 240-volt lines for dryers, ovens, and HVAC, but everyday outlets are power-limited. In contrast, much of Europe and Asia built their infrastructure assuming higher-draw appliances could be used anywhere.

You can boil a kettle twice as fast in the UK.

You can run an appliance roughly twice as powerful in the UK. Every ordinary socket runs at about the same voltage as a US dryer socket.

You could charge your car at roughly 230 V × 15 A ≈ 3.5 kW from a normal outlet, about double what a standard U.S. outlet can deliver.
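
Here’s that arithmetic as a tiny Python sketch (nominal voltages and a made-up 2 kW load, purely illustrative):

# P = V * I at the outlet; I = P / V for a given load; heating scales with I^2 * R.
def max_power_w(volts, amps):
    return volts * amps

def current_a(load_w, volts):
    return load_w / volts

for volts in (120.0, 230.0):
    cap = max_power_w(volts, 15.0)   # a standard 15 A circuit
    i = current_a(2000.0, volts)     # a 2 kW kettle, say
    print(f"{volts:.0f} V: circuit max {cap/1000:.2f} kW, "
          f"a 2 kW load draws {i:.1f} A (relative I^2 heating {i**2:.0f})")

At 120 V the 2 kW load would already exceed the 15 A limit, which is roughly why U.S. kettles top out around 1,500 watts.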

This isn’t a cultural choice. It’s an engineering one. But once the standard locks in, it shapes the entire ecosystem above it.

In other words: the voltage decision—made decades before modern electronics—still affects every device, appliance, and building today. Some countries enjoy structurally higher power density per household circuit simply because the voltage floor is higher.

It’s another example of how a foundational design parameter—like written language—becomes a quiet yet powerful economic lever years later.

AI Economics

The deeper I get into building with AI, the more I feel the economics of compute in a very literal way. I’ve always intellectually understood what Sam Altman meant years ago when he said, “compute is the currency of the future,” but actually building tools, paying invoices, and watching costs scale makes that statement concrete.

Every token is electricity.
Every long prompt is memory bandwidth.
Every background agent is GPU-hours.
You start to feel it the same way founders feel cloud bills: as gravity.

The viability of an AI product isn’t defined by “how smart” the model is. It’s defined by how efficiently you can extract intelligence. More tokens always means more money — not in some abstract way, but directly in hardware cost and energy consumption. Models may get cheaper, but the relationship doesn’t change. More thinking equals more watts.

This is why the user’s economic position matters. A $20-per-month consumer cannot sustainably “use” that much intelligence. Their price point caps the amount of compute you can deploy on their behalf. You are forced into efficiency. You must compress. You must reduce waste.

On the other hand, someone whose time is worth thousands per hour changes the equation entirely. Spending $200 of compute to save them an hour isn’t just reasonable — it’s a bargain. Entire classes of products suddenly become viable the moment the cost of compute is evaluated relative to the value of the human on the other side.
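
A back-of-the-envelope sketch of that asymmetry (the prices and token counts below are placeholders I made up, not any provider’s actual rates):

# Hypothetical prices, for illustration only.
PRICE_IN = 3.00    # USD per million input tokens (assumed)
PRICE_OUT = 15.00  # USD per million output tokens (assumed)

def cost(input_tokens, output_tokens):
    return input_tokens / 1e6 * PRICE_IN + output_tokens / 1e6 * PRICE_OUT

chat_turn = cost(2_000, 800)        # a short back-and-forth
agent_run = cost(400_000, 120_000)  # a long agentic session
print(f"chat turn ~${chat_turn:.3f}, agent run ~${agent_run:.2f}")
print(f"$20/month covers ~{20 / chat_turn:,.0f} chat turns but only ~{20 / agent_run:.1f} agent runs")

The subscriber gets thousands of small exchanges; a handful of heavy agent runs blows through the same budget, while the person whose hour is worth thousands barely notices the $200.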

This is the piece that has become unavoidable:
AI isn’t gated by ideas or even by models anymore. It’s gated by the economics of compute.
Efficiency determines viability. Token density determines feasibility. And your product’s business model is defined by how intelligently you can spend FLOPs.

I could go on and on…. this is fascinating.

The terminal in VSCode is xterm.js

The Figma IPO reminds me how wild high performance browser stuff is. The terminal in VSCode is xterm.js.

Everything is CSS/DOM/WebGL. Really wild. A WebGL-powered, high-performance web app is literally what Figma did, all around the same time I think!

Cool to think about.

Thinking about Human-Languages like Programming Languages

This comment articulates how Japanese works better than anything else I’ve come across so far.

I love that it shifts my mindset toward looking at human languages the way I look at programming languages.

Japanese has certain “features” and a particular lexical order, and those come with their own pros and cons.

This all feels super relevant in the age of ChatGPT: how we think and how we process information seem to be highly intertwined.

Comment by u/Spell6421, from a discussion in r/LearnJapanese.

macOS Native Bandwidth Test vs Speed Test Websites that ISPs Prioritize

DISCLAIMER: Project in progress.

Google, Ookla, and other speed test providers are known to be prioritized by ISPs.

They show you fun numbers! See, it says 600 Mbps up! Cool! Most people will think, “Okay, I’m getting 600 Mbps up!” and feel good about their service.

But the real upload rate is different. Obviously it’s in their business interest to throttle and shape traffic; there’s the whole FCC net neutrality repeal (2017) Ajit Pai debacle.

Anyway, the focus of this post is basically: under the current scheme we’re in, trying to calculate the *actual* downlink/uplink as a “layman.”

At least as of Ventura 13.5, there’s a built-in Terminal command:

networkQuality -v
==== SUMMARY ====
Uplink capacity: 87.910 Mbps (Accuracy: High)
Downlink capacity: 504.511 Mbps (Accuracy: High)
Responsiveness: Medium (230 RPM) (Accuracy: High)
Idle Latency: 30.917 milliseconds (Accuracy: High)
Interface: en0
Uplink bytes transferred: 164.638 MB
Downlink bytes transferred: 1.274 GB
Uplink Flow count: 20
Downlink Flow count: 12
Start: 8/4/23, 3:02:58 PM
End: 8/4/23, 3:03:19 PM
OS Version: Version 13.5 (Build 22G74)

You’ll notice a difference from public test sites, which ISPs are known to prioritize traffic to in order to flatter their upload numbers for the average consumer googling “speed test.”

In my Google/Ookla tests, I get 600 Mbps up.

Someone had a trick: ping speedtest.net so the ISP thinks you’re running a speed test. Pinging speedtest.net while running a local check does seem to change the results.

Router speed test

The router has its own up/down speed test. This is incredible.

But we’re back to the “ISP prioritization” problem: it’s likely they have prioritized that traffic too.

So how do we really find out?

Why don’t we just upload a couple gigs and see how long that takes?

I just uploaded a 2 GB file to Dropbox; it took about 20 seconds.

2,000 MB / 20 s = 100 MB/s

Convert to megabits: 100 MB/s × 8 = 800 Mbps

So it looks like uploads to Dropbox are running at the intended speeds.

Okay that napkin math checks out.
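
The same napkin math as a reusable Python helper (decimal megabytes, so 2 GB ≈ 2,000 MB):

def throughput_mbps(megabytes, seconds):
    return megabytes * 8 / seconds   # MB/s -> Mbps

print(throughput_mbps(2000, 20))     # the 2 GB Dropbox upload above: ~800 Mbps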

So despite the local networkQuality test showing ~88 Mbps up, the napkin math says the Dropbox upload ran at ~800 Mbps.

Summary?

Clearly ISPs want to prioritize tests, but it’s not necessarily nefarious. Various local tools can show a low uplink simply because the ISP isn’t prioritizing that traffic. It makes sense: they need to distribute bandwidth in a way that’s most efficient. Poking holes and pointing fingers isn’t really productive.

I’m glad I did a manual upload, i.e. “a real 2 GB load,” which was prioritized and ran at ~800 Mbps (on Wi-Fi, mind you).

That tells me I’m getting the speeds when I need them, and I’ll just have to trust that my internet overlords have our best interests at heart. And to be honest, I was surprised that when I uploaded something I actually needed to, versus a synthetic test, it DID perform roughly 10x better.

I feel good enough for now.

Turborepo: run failed: turbo.json: json: cannot unmarshal array into Go struct field — or how you should always add your json schema

This one was tough to debug. I upgraded turbo to the latest and also added a new “globalDependencies” entry, which feeds into the cache hash for all pipelines.

The problem was that I had placed “globalDependencies” inside “pipeline” by accident, and somehow just missed it. A better error message would have helped, because this one is so generic that people hit it everywhere; low signal-to-noise ratio when searching.

I adopted Turborepo super early, so a lot of my setup is old. I added the JSON schema:

{"$schema": "https://turbo.build/schema.json"}, and the linter immediately called out that values in the pipeline object should be objects, which finally drew my attention to the fact that globalDependencies was not at the root.

Lesson of the day? Add your schema, get some free lint.

{
    "$schema": "https://turbo.build/schema.json",
    "pipeline": {
        "myMistake": ["..."]
    }
}

getIronSession Vercel incorrect usage error

If you get this error, it may be because you imported `getIronSession` without using it.
There seems to be some validation magic that checks whether you are using it correctly, and it fires even if you never call it at all and are using the `withIronSessionApiRoute` API wrapper instead.

Vercel Disable a Project from Github App Integration Builds Without Deleting

I think it’s odd this isn’t obvious to do, but the only way I’ve found to stop a project connected to a repo from auto-building on every commit is to go into the “Ignored Build Step” setting and enter a command that exits with code 0.

Ignored build settings: exit 0;

That’s it!

This needs to be added as a feature, because some of Vercel’s features are project-wide and can’t be tested separately from your main project.

For example, Vercel Deployment Protection (which requires a login/password, etc.) applies to the whole project, meaning your CI builds will fail as you try to update each one with a bypass method.

Therefore, the only way to have a live site and a test site to test/work on CI bypasses is to create a second project connected to the same repo.

But what if you don’t want to double your builds for a moment? Then see above…

  • You can also disconnect the repo, but you will lose branch specific env vars
  • You can delete the project, I guess