Last weekend, Bloomberg Weekend had a great story by Christopher Beam about how linguists are annoyed at Kalshi. Kalshi is a prediction market where you can bet on sports and various other events, including “mentions markets,” where a contract pays $1 if Jerome Powell says a certain word at a press conference or whatever, and $0 if he doesn’t. But what are the boundaries of a word? Are singular and plural nouns the same word? Past and present tense verbs? Participles? Etc., all these boring disputes. Kalshi has some rules to resolve these questions, and linguists have criticisms:
Karlos Arregi, a linguistics professor at the University of Chicago, says they appear to be arbitrary rather than based on a particular theory or philosophy of language. “This looks like the kind of rules you’d have in a game like Scrabble,” he says. “It’s obvious to me these rules were not done by a linguist.”
Rivka Levitan, a professor of computer science and linguistics at Brooklyn College CUNY, describes them as “more legalistic than linguistic.” A good set of rules, she suggests, ought to prioritize logical consistency. Instead, Kalshi’s rules seem to draw lines for no clear reason. They allow for certain types of inflections (such as -s for plural) but not others (such as -ed, -ing, -er, or -est). If the “strike” word — that is, the one listed on the mention market — is “veteran,” then “veterans” counts. But if the strike word is “veterans,” then “veteran” doesn’t count. This is unambiguous, but it’s also illogical. Similarly random-seeming are rules that allow deviations from the strike word when it comes to meaning but not form: If traders bet on a football announcer mentioning “wind,” for instance, their bet still hits for “the clock winds down.” But if the strike word is “run,” then “ran” or “running” doesn’t count.
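The complained-about rules can be made concrete with a toy sketch. This is not Kalshi's actual implementation, just my own illustration of the pattern the linguists object to: the strike word matches itself or its "-s" plural, no other inflections count, and the rule is asymmetric by construction.

```python
# Toy sketch (not Kalshi's real rules engine) of the matching behavior
# described above: exact match or strike-word-plus-"s" counts; other
# inflections (-ed, -ing, -er, -est) and de-pluralization do not.

def mention_hits(strike: str, spoken: str) -> bool:
    """Return True if `spoken` counts as a mention of `strike`."""
    strike, spoken = strike.lower(), spoken.lower()
    if spoken == strike:
        return True          # exact match always counts
    if spoken == strike + "s":
        return True          # the "-s" plural of the strike word counts
    return False             # nothing else does

# The asymmetry the linguists complain about:
assert mention_hits("veteran", "veterans") is True   # strike "veteran": "veterans" counts
assert mention_hits("veterans", "veteran") is False  # strike "veterans": "veteran" doesn't
assert mention_hits("run", "running") is False       # "-ing" inflection never counts
```

Unambiguous, as the professor says, but not derived from any theory of what a "word" is.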
Arguably what Kalshi needs is not a team of linguists but a team of philosophers: It offers a huge variety of contracts that pay off if some event occurs, but the boundaries of “event” and “occur” are not always crisp. Beam notes:
it also reflects the broader challenge prediction market platforms face in their attempt to reduce reality to a series of yes-or-no bets. ... Many of the best-known disputes involve angels-on-a-pinhead-type debates: Did Cardi B “perform” at the Super Bowl? Did Ukrainian President Volodymyr Zelenskiy wear a suit? Did the US “invade” Venezuela?
I suppose this is not unique to prediction markets. “More legalistic than linguistic” is a decent description of a lot of payoff disputes we have discussed around here. There is some ambiguous term in a merger agreement or bond indenture or credit agreement or credit default swap definitions or natural gas supply contract or etc. etc. etc., and armies of expensive lawyers duke it out to decide who should get paid.
Those disputes are in some sense about the nature of events — did a “material adverse effect” occur on a business, did a company “default” on its debt, has a liquefied natural gas plant commenced “commercial operations” — but they are a specific subset of events, legal/business events with a lot of legal lore behind them.
Prediction markets have … democratized? … this, in two senses:
Now, instead of fancy law firms and hedge funds debating whether some event has occurred, anyone can buy a $1 contract and fight over the meaning of terms, and they often do.
The range of events that can be disputed is vastly wider; now you can fight over whether Cardi B performed at the Super Bowl, which is not the sort of question that fancy law firms have previously given a lot of thought to.
Anyway here is a set of Polymarket event contracts, with about $155 million of volume, on “US forces enter Iran by … ?” As we have discussed, Polymarket gleefully lists contracts on war, while Kalshi, which is a regulated US commodities exchange, tries not to. Various March contracts expired at zero: US forces had not entered Iran.
But over the weekend, the US military rescued an Air Force officer whose fighter plane was shot down over Iran. “Navy SEAL Team 6 commandos extracted the officer in a massive operation that involved hundreds of special operations troops and other military personnel,” reported the New York Times. Is that “US forces entering Iran”? The Polymarket contract rules say:
This market will resolve to “Yes” if active US military personnel physically enter Iran at any point by the listed date (ET). Otherwise, this market will resolve to "No".
Military special operation forces will qualify; however, intelligence operatives will not count. …
US military personnel must physically enter the terrestrial territory of Iran to qualify. Entering Iran’s maritime or aerial territory will not count.
The resolution source will be a consensus of credible reporting.
Note: Only US military personnel who deliberately enter the terrestrial territory of Iran for operational purposes (e.g., military, humanitarian, etc.) will qualify. Pilots who are shot down, or other cases in which US military personnel do not deliberately enter the terrestrial territory of Iran, will not qualify.
My reading of (1) those rules and (2) the “credible reporting” that I have seen suggests that the answer is yes: The officer who was shot down doesn’t count, but at least some of the “hundreds of special operations troops” presumably “deliberately enter[ed] the terrestrial territory of Iran” to rescue him. (“Definitely a lot of folks sitting around on Easter Sunday waiting to find out if a rescue helicopter touched down or just dropped some rope,” a reader emailed me.)
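That reading can be written down as a predicate. This is a hedged sketch, with field names of my own invention; Polymarket resolves by human consensus and credible reporting, not by running code. But it shows why the rescue plausibly resolves Yes while the downed pilot alone would not.

```python
from dataclasses import dataclass

# My own encoding of the quoted resolution rules, for illustration only.
@dataclass
class Entry:
    is_us_military: bool  # active US military personnel, not intelligence operatives
    terrestrial: bool     # physically on Iranian soil; air/sea presence alone fails
    deliberate: bool      # entered on purpose for operational reasons, not shot down

def resolves_yes(entries: list[Entry]) -> bool:
    """Market resolves Yes if any one entry satisfies all three conditions."""
    return any(e.is_us_military and e.terrestrial and e.deliberate for e in entries)

downed_pilot = Entry(is_us_military=True, terrestrial=True, deliberate=False)
rescue_team = Entry(is_us_military=True, terrestrial=True, deliberate=True)

assert resolves_yes([downed_pilot]) is False              # pilot alone: No
assert resolves_yes([downed_pilot, rescue_team]) is True  # SEALs touched down: Yes
```

Of course, the actual dispute is over whether the helicopter crews "deliberately entered terrestrial territory" at all, which no boolean field settles.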
The April 30 contract is priced at close to 100%, suggesting that the market agrees that this counts; a Yes resolution has been proposed but as of this morning is still disputed. Various commenters on Polymarket complain:
A pilot rescue should be considered as an intelligence operative and not a military landing.
And:
The market rules are unambiguous: only deliberate physical entry of US military personnel into Iran’s terrestrial territory qualifies. Aerial presence, downed pilots, or rescue scenarios explicitly do NOT count.
And, most angels-on-a-pin-ishly:
The verb “enter” denotes a single act, namely the moment of crossing into Iran. In this scenario, that crossing occurs while personnel are inside a helicopter, i.e., within Iran’s aerial territory. The rules explicitly provide that “Entering Iran’s maritime or aerial territory will not count.”
Therefore, the only identifiable act of “entry” is expressly excluded by the rule. Once inside Iran, any subsequent landing or disembarkation does not constitute a new act of “entry,” but merely continued presence within the territory.
I disagree, but I also sympathize; the headline “US forces enter Iran by ...” does suggest a terrestrial invasion, not a search-and-rescue operation, however large and coincidentally ground-based. These rules were not written by a philosopher of war. Elsewhere, here is a Kalshi market on “When will Pam Bondi depart as Attorney General,” also pending resolution: “Please note, the ‘Before April 3’ market will resolve once sufficient information has been made available as to whether Pam Bondi vacated her role as Attorney General in this time frame. Announcements of intent to depart, without further evidence of actual departure, are not sufficient to resolve this market to ‘Yes.’”
Does this matter? I mean, no. Prediction markets are derivatives markets, where people compete to outsmart each other in zero-sum games. Being smart about technicalities and rule interpretation is a good and standard and normal way to make money in derivatives markets, as we talk about all the time. If prediction markets allow more people to be fleeced by reading the rules wrong, or to fleece others by reading the rules right, then hey great whatever.
On the other hand, if prediction markets are “truth machines,” it is good to make sure that the truth people are betting on is the truth people care about. If the purpose of this $155 million market is to inform the world about the probability that the US will launch a ground invasion of Iran, then does this rescue mission meet that purpose? Or does it just meet a technicality? One reason that comment that “any subsequent landing or disembarkation does not constitute a new act of ‘entry’” seems wrong is that, if 10,000 US paratroopers parachuted into Iran and seized the capital, that would *not* count as “entering Iran” by that comment’s definition. Seems like it should! Should the markets try to write rules in a way that reflects people’s intuitive use of words, and the things they intuitively want to predict, or do more technicalities make for more fun? If they are on a quest for truth, should Kalshi and Polymarket be hiring philosophers?
Grok tying
Investment bankers, I often point out, operate in a gift economy in which they mostly provide free work, advice, sports tickets, etc., to clients, and then every once in a while get a lucrative mandate to run a merger or a debt offering. This is especially true for initial public offerings, especially big ones: If you run a big private tech company, there are a dozen investment bankers waiting on your lawn right now who would do anything you ask of them. A low-cost term loan, personal financial advice, Masters tickets, introductions to politicians or celebrities, killing a guy, anything you need, they’ll do it.
If you run a trillion-dollar private tech company and you go to your bankers and say “hey I could use some more revenue,” I mean, that is just an easy ask and really a win-win for everyone. (More revenue means a bigger IPO and more fees for the bankers.) The New York Times reports:
Elon Musk has made a particularly bold demand of his Wall Street advisers ahead of the initial public offering of his company SpaceX.
Mr. Musk is requiring banks, law firms, auditors and other advisers working on the I.P.O. to buy subscriptions to Grok, his artificial intelligence chatbot, which is part of SpaceX, according to four people with knowledge of the matter, who were not authorized to speak publicly about confidential discussions.
Some of the banks have agreed to spend tens of millions on the chatbot, and they have already started integrating Grok into their I.T. systems, three of the people said. …
The I.P.O. is expected to raise more than $50 billion at a valuation above $1 trillion, which means the banks could generate fees in excess of $500 million for advising on the deal.
Mr. Musk’s ability to secure business from the banks for his A.I. chatbot also shows the enormous sway of the world’s richest man over a banking sector clamoring for his business now and into the future.
I kind of like it? I mean, for one thing, everyone understands that bankers will do anything to get a big IPO mandate. Uber Technologies Inc.’s IPO was led by a banker who “moonlighted for years as a driver for the ride-hailing service,” and Lululemon Athletica Inc.’s was led by bankers who wore yoga pants to the pitch. Har har har, but what do the companies get out of that? Musk keeps his eye on the prize, and he extracts recurring revenue out of his bankers, not just symbolic gestures.
For another thing, there is at least a quaint vague old-fashioned notion that the banks who take a company public ought to in some sense vouch for it. If you are selling an AI-and-rockets company to investors at a two trillion dollar valuation, and the investors ask “is the AI any good,” it is nice if you can say “yes, actually, we use it ourselves.” Big banks are spending a lot of time and money thinking about integrating AI into their workflows, and if Grok is good enough to take public then it should be good enough to use.
Index exclusivity
One of the main expenses of an index fund is licensing an index: If you want to be an S&P 500 index fund, or a Nasdaq 100 index fund, you have to pay S&P Dow Jones Indices or Nasdaq some money to use their index. This has always struck me as a bit weird. Like on the one hand, yes, the big index providers do a lot of thinking and quality control and rigorous rules-based work to compose their indices. On the other hand, you know, it’s a list of big stocks? How hard can it be to copy? Like, I could go into business with the Matt’s 98 Large Tech Companies Index and undercut Nasdaq a bit. Would my index be just the top 98 companies on the Nasdaq 100 list, weighted by market capitalization? Shh, that is just a coincidence. Anyway my podcast co-host Katie Greifeld reports:
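For what it's worth, the "how hard can it be" joke is easy to make literal. Here is a toy sketch of cap-weighted index construction: take the top N names by market capitalization and weight proportionally. Tickers and numbers are made up for illustration.

```python
# Toy cap-weighted index: top-n names by market cap, weights proportional
# to cap and summing to 1. Illustrative only; real index providers layer
# eligibility rules, buffers, and rebalancing schedules on top of this.

def build_index(market_caps: dict[str, float], n: int) -> dict[str, float]:
    """Return the top-n names with market-cap-proportional weights."""
    top = sorted(market_caps.items(), key=lambda kv: kv[1], reverse=True)[:n]
    total = sum(cap for _, cap in top)
    return {name: cap / total for name, cap in top}

caps = {"AAA": 3000.0, "BBB": 2000.0, "CCC": 1000.0, "DDD": 500.0}
index = build_index(caps, n=3)  # "Matt's 3 Large Companies Index"

assert set(index) == {"AAA", "BBB", "CCC"}
assert abs(index["AAA"] - 0.5) < 1e-9  # 3000 / 6000
```

The hard (and billable) part is everything this leaves out: what counts as a "non-financial company," what happens in a tie, when you rebalance.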
BlackRock Inc. is setting its sights on a corner of the $13.7 trillion US exchange-traded fund industry long controlled by Invesco Ltd: tracking the Nasdaq 100 Index. ...
Should it launch, IQQ would become one of just a handful of US-listed ETFs to solely track the Nasdaq 100, and the first one to not be managed by Invesco. Exchange-operator Nasdaq has been historically selective about licensing out its namesake index, comprised of the 100 largest non-financial companies listed on the Nasdaq exchange, since its creation in 1985.
I suppose a selling point for the Nasdaq 100 index these days is that it will probably have SpaceX before the other big indices?
Sam Altman
Artificial intelligence, at this point, is a real thing, but it is also a fruitful metaphor. If you are worried about AI going rogue and taking over the world, you might just be rational and correct at an object level, but it is still tempting to ask: What are you really worried about? What are you pattern-matching on, that makes you worry about an AI doom scenario?
Around here we have discussed two possible answers. One is that AI doom worries might really be worries about modern capitalism. Ted Chiang, the science-fiction writer, made this argument back in 2017. Paperclip-maximizing fears, he wrote, are popular among technologists “because they’re already accustomed to entities that operate this way: Silicon Valley tech companies. … When Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.”
Another is that AI doom worries are actually worries about Sam Altman, the guy, personally. Back in 2023, the board of directors of OpenAI briefly fired Altman as chief executive officer, announcing that “he was not consistently candid in his communications with the board.” OpenAI employees panicked and asked the board for specifics, and, the Wall Street Journal reported:
They said that Altman wasn’t candid, and often got his way. The board said that Altman had been so deft they couldn’t even give a specific example.
“Without realizing it, we were gradually overmatched by a superior intelligence, until he ended up controlling us in ways that are too subtle for us to even explain,” I wrote. “Their fears about rogue AI are such obvious metaphors for their mundane real-life problems.”
In the New Yorker today, Ronan Farrow and Andrew Marantz have a profile of Altman that kind of supports both of these points. Altman, in their depiction, demonstrates alignment faking:
Most of the people we spoke to shared the judgment of [OpenAI co-founder Ilya] Sutskever and [Anthropic co-founder Dario] Amodei: Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart. “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”
Does that not sound like ChatGPT? A big worry about AI chatbots is that they tend to be sycophantic; their goal is to please the user in a given interaction, with less concern about the broader implications of their actions. Also:
“He’s unbelievably persuasive. Like, Jedi mind tricks,” a tech executive who has worked with Altman said. “He’s just next level.” A classic hypothetical scenario in alignment research involves a contest of wills between a human and a high-powered A.I. In such a contest, researchers usually argue, the A.I. would surely win, much the way a grandmaster will beat a child at chess. Watching Altman outmaneuver the people around him during the Blip, the executive continued, had been like watching “an A.G.I. breaking out of the box.”
But reading between the lines a bit, you also get the sense that people in AI dislike Altman because he is too commercial: raising money from autocratic governments, discussing how “OpenAI could enrich itself by playing world powers—including China and Russia—against one another, perhaps by starting a bidding war among them,” building military technology, etc. More generally, he seems more focused on raising money and making profits than on AI safety:
He defended some of his actions as the practice of “normal competitive business.” Several investors we spoke to described Altman’s detractors as naïve to expect anything else. “There is a group of fatalistic extremists that has taken the safety pill almost to a science-fiction level,” Conway, the investor, told us. “His mission is measured by numbers. And, when you look at the success of OpenAI, it’s hard to argue with the numbers.”
“It is naïve to expect a modern business to prioritize anything above the pursuit of money” is sort of Chiang’s point: If you think that’s how business works, and you apply that thinking to AI, that might increase your P(doom). Or not? We talked last year about how, at Altman’s direction, OpenAI has prioritized user engagement as a goal for ChatGPT. I wrote: “Sam Altman was apparently faced with a literal choice between working to make OpenAI’s models superintelligent, and working to make them give users answers that they wanted, and he apparently decided ‘ehh go for engagement.’” Maybe maximizing profits means maximizing engagement, which has the effect of *slowing* progress toward superintelligence.
There’s a fun Anthropic paper about “subliminal learning,” in which a misaligned AI model will train other AI models to be misaligned without appearing to:
Language models learn traits from model-generated data that is semantically unrelated to those traits. For example, a “student” model learns to prefer owls when trained on sequences of numbers generated by a “teacher” model that prefers owls. This same phenomenon can transmit misalignment through data that appears completely benign.
I wonder if that’s a metaphor too. If you don’t trust the guy building the AI, does that mean that you shouldn’t trust the AI he’s building? Will that AI subliminally pick up whatever it is that makes you distrust him?
VC value add
The essential problem in venture capital is not picking deals but getting into deals. Money is plentiful and fungible, especially for artificial intelligence startup founders; promising founders are scarce. Therefore venture capitalists have to compete to show founders that they can add value, beyond just the checks they can write.
There are some classic approaches. Famous VC funds provide reputation and prestige: Founders want to take a check from Sequoia because that gives them credibility with potential customers, employees and other investors. Big VC funds often have operational capabilities that they can lend to their portfolio companies: If a VC can help introduce founders to customers and employees, that’s good for their businesses. Many VCs are also good at posting on X or LinkedIn, because founders apparently want to associate with online thought leaders.
But you can have a simpler, more down-to-earth approach. Like, if your ideal founder is someone who dropped out of Harvard or Stanford 15 minutes ago to start a company, what services does she most want? Possibly laundry? Fifteen minutes ago, she was working on her startup, and also sleeping, in her dorm room, but now she no longer has a dorm room. Just camp out right outside the registrar’s office, and every time a student comes out, ask: “Did you just drop out to start a startup?” Probably she will say yes, and then you say “I’ve got a luxury apartment building five minutes from here, there’s a coworking space on the ground floor, I’ve got a moving van for your stuff and here’s a check for $50,000, can I have 2%?” Probably she will say yes, and probably that’s a good deal for you. Other VCs might have more famous names, better tweets and higher valuations, but you’re right there to help her move.
[Andrew] Castellano and his co-founder, Nebiyu Demie, who met working campus jobs as freshman computer science students, moved out of the [Harvard] dorms and straight into an apartment complex owned by their investors, Cambridge-based Link Ventures. Their next-door neighbors are three Delta Kappa Epsilon fraternity brothers developing AI that helps insurance companies sell more policies.
During this blisteringly fast phase of AI development, it’s no longer enough for venture capital firms to invest in companies. They’re buying apartments and workplaces, Ikea furniture and dishes, and providing housekeeping for their teenage and 20-something founders. The logic: fewer responsibilities mean more waking hours for working. ...
While young founders have long dropped out of college to chase startup dreams during past technological booms, this time, their financial backers are funding housing for them and ensuring their daily needs, from changing sheets, taking out the trash and booking travel, are met. …
Link Ventures founder Dave Blundin spent $5.4 million of his own money last year to buy a six-unit, 10,000 square foot apartment building near MIT in Cambridge to house some of the founders the firm has backed.
After buying the building, Blundin spent another $500,000 on renovations, redoing floors, painting cabinets and gutting rust-stained ceramic tubs, to “make it a little more techie looking,” said Karen Green, Link’s office manager whose staff jokingly refer to as the “den mother.” She furnished the apartments, keeps them tidy and looks after the young residents.
“Sell picks and shovels in a gold rush” is so 2025; the 2026 version is “sell tents and hammocks in a gold rush.”
Things happen
An Inside Look at OpenAI and Anthropic’s Finances Ahead of Their IPOs. The Citrini Research analyst at the Strait of Hormuz. Nelson Peltz’s bidding war highlights $25bn wave of asset manager consolidation. Debanking. Yuan Fees for Ships to Pass Hormuz Boost Chinese Payment Stocks. Russian crypto payment system expands into Africa. Dimon Urges US to ‘Get Stronger,’ Keep Economic, Military Power. IMF Warns Tokenized Finance Risks Amplifying Market Crises Ahead. Gulf Funds Agree to Back Paramount’s $81 Billion Takeover of Warner. All that glisters: Maga influencers promote gold but investors feel short-changed. The Wall Street Dealmaker Charged With Solving Paul Weiss’s Identity Crisis. Workers Are Claiming ‘No Tax on Overtime’ — Maybe a Bit Too Much. More Americans Are Breaking Into the Upper Middle Class. “To date, Cliffwater hasn’t made adjustments to the NAV of a nontraded BDC to reflect that it is paying less than 100% of redemption requests.” Strategy Posts $14.5 Billion Unrealized Loss in First Quarter. Capture factory. Vegan ortolan. World’s oldest tortoise caught in viral crypto death scam.
If you'd like to get Money Stuff in handy email form, right in your inbox, please subscribe at this link. Or you can subscribe to Money Stuff and other great Bloomberg newsletters here. Thanks!