
🔼 Reaching for the stars – #123

In the economy of attention, reason might be overrated

Good morning,

Welcome to the 123rd (!) Sunday email. I’ve been consistent, that's for sure.

This week, I’ve been thinking a lot.

First, I thought about my values and what’s important to me:

  1. What gets my time and energy? Why? Is it intentional or accidental?

  2. Who do I love and care about? Why? Do they know they are important to me? Are these relationships somewhat reciprocal?

  3. What may I improve in my life? Why?

Then, I started thinking about leaders stepping down recently and why.

Two recent public examples in Sweden are female; much of the analysis has been centred around gender. And while that analysis might be somewhat relevant, other explanations may exist.

For example, how long can you live under highly uncertain and stressful circumstances – like what we’ve seen in the last four years – before your health starts to feel the increased cortisol levels? Two years? Five years?

I’m not talking about the “I have so much to do” kind of stress. Instead, I’m curious about how stressful situations and environments impact us long-term, specifically how they impact our sympathetic nervous system — commonly known as “fight or flight”.

Stress is not only a cognitive experience but also physical, and the levels of stress hormones in our bodies can be measured.

No matter how good we are at time and task management, our stress hormones will increase in environments and situations that activate our “fight or flight” system. We are not robots.

But could the average organisation be one of those environments?

Contrary to what many think, leaders often have lower stress levels than non-leaders. This is because leadership positions often provide power and control, which reduces stress. Leaders with less control and power display higher cortisol levels and more anxiety than leaders with more control and power.

However, in a highly uncertain world with constant change, I believe few leaders feel like they have power and control anymore. Additionally, cortisol levels increase when dealing with acute stressors like high-stakes situations or crises.

So, I would assume that leaders in most organisations would have significantly increased cortisol levels in recent years. (I hope someone collected that data).

Yeah, yeah, what’s the big deal?

Unfortunately, the impact of stress hormones is a big deal. High cortisol has been linked to high blood pressure, stroke, anxiety, depression, obesity, diabetes, and immune system suppression.

This Swedish study shows how high cortisol levels — for example, due to “long-term stress exposure typical for modern societies” — were strongly linked to a higher risk of having a heart attack, with the risk being over five (!) times higher for individuals with elevated levels. We can compare this to smoking, which increases the risk of heart disease 2-4 times.

I don’t know about you, but if that statistic were well known, I believe most people would think twice about taking a job as a Fortune 500 CEO or political party leader.

I’m, of course, not saying this is the case for every organisation. Nor do I think having a leadership role has always been this challenging or will always be.

But at the moment, I think it plays a significant role.

What’s the silver lining?

If we design organisations and roles differently and intentionally, there are many ways to reduce stress on leaders (and employees).

As a concrete example, this week, I spoke with David Ekelund, the CEO of the shoe label Icebug. They are doing things very differently, and I recommend checking it out if you want inspiration.

But as you can tell, I’m thinking a lot about this. What can be changed in society at large? And how can I do things differently with Better Odds?

What do you think? I’d love to know.

Anna

  1. Podcast — This beautiful conversation between Ezra Klein and Rhaina Cohen provided a necessary reflection on friendships and chosen families. We are all so much better off when we let people into our lives, not just in romantic ways. (Yes, it’s a long episode, but it's worth listening to the end).

  2. Book — Continuing on the topic of friendships. Sheila Liming’s book Hanging Out: The Radical Power of Killing Time passionately argues for sharing unstructured offline time with our friends. We shouldn’t just spend time “doing” things; we should hang out more.

  3. Tool — Pitch.com is an alternative to PowerPoint and Keynote that my designer friend Anton recommended. I’ve been testing it for a while and recently decided we will use it for all Better Odds presentations (which are all currently made in Figma).

  4. Article — Read ‘Enshittification’ is coming for absolutely everything, by Cory Doctorow in the Financial Times. “We’re all living through a great enshittening, in which the services that matter to us, that we rely on, are turning into giant piles of shit.”

  5. Save — The cookie trend is trying hard to charm the Europeans, who seem much more sceptical than the North Americans. Maybe it’s because it feels insane to pay €4 for a cookie when we can bake them ourselves in about 30 minutes. Here’s a recipe for Japanese Miso Caramel Cookies.

Just coming in: In Pakistan’s general election, the party of jailed leader Imran Khan claims victory, but the military might have the final say about the results. Read The Guardian for the latest update.

Altman’s high-stakes gamble: a $7 trillion investment in semiconductor chips

Artificial Intelligence, Business

OpenAI's CEO, Sam Altman, is on a mission to shake up the AI world with a plan to raise $5-7 trillion for semiconductor manufacturing. This sum is comparable to the combined market cap of Apple and Microsoft or the combined GDPs of India and the United Kingdom. While some analysts have dismissed it as a Musk-like bid for attention, Altman seeks to solve the critical shortage of AI chips essential for powering advanced technologies like OpenAI’s ChatGPT.

Why it matters: A shortage of semiconductor chips bottlenecks the global race for AI dominance; Altman’s new plan is to solve this challenge and ensure OpenAI's competitive edge in AI.

The strategy: Altman isn't going solo. Known as a skilled dealmaker, he's weaving a complex partnership web with investors, chip giants like TSMC, and even power providers. OpenAI will be a significant customer of the new factories, ensuring a steady demand for what promises to be a revolutionary uptick in chip production.

Backdrop: Amidst a global scramble for semiconductor dominance, the United States government has recently proposed a $5 billion investment in semiconductor R&D. This comes as companies like Nvidia struggle to meet the demand for their top-tier AI chips, underscoring the urgency of Altman's project.

Global interest: The project's scope is international. Altman is courting investors worldwide, including discussions with the United Arab Emirates' Sheikh Tahnoun bin Zayed al Nahyan and United States Commerce Secretary Gina Raimondo. It's a clear signal that the future of AI and chip manufacturing is not just a tech issue but a geopolitical one. However, any heavy UAE investment in the US semiconductor sector will likely spark debate, especially over how it aligns, or conflicts, with US strategies for semiconductor production and AI innovation.

Yes, but: For all its ambition, the project's success is far from guaranteed. The semiconductor industry is notoriously complex and capital-intensive, and Altman's plan hinges on navigating this challenging landscape and forging unprecedented partnerships.

Bottom line: If successful, Altman's venture could secure OpenAI's chip needs and reshape the global semiconductor landscape. It's a high-stakes bet on the future of AI, with implications far beyond the tech industry.

Germany set to block EU's landmark sustainability law

Sustainability

Big picture: The European Council delayed Friday’s vote on the European Union's ambitious Corporate Sustainability Due Diligence Directive (CS3D) after Germany indicated it would abstain. The delay suggests last-minute efforts to revive the proposal and get Germany on board, but these are considered unlikely to succeed, and a vote is speculated to occur as soon as February 14.

Why it matters: The CS3D directive aims to extend corporate accountability for environmental and human rights impacts, a critical element of the EU's strategy to address climate change and enforce environmental, social, and governance (ESG) standards across businesses. Together with the Corporate Sustainability Reporting Directive (CSRD), it forms the backbone of the EU's push towards a sustainable economy, mandating ESG reporting obligations and expanding corporate liability. Germany's abstention signals a significant setback for the EU's comprehensive efforts to integrate sustainability into corporate governance.

Background: Since 2022, the EU has been crafting the CS3D, necessitating an approving response from the European Parliament, Commission, and Council. But concerns over the reporting burden on small and medium-sized enterprises (SMEs) have prompted calls for eased regulations, notably from Ursula von der Leyen, the European Commission President and former German Defense Minister.

The German shift: The resistance to the CS3D, mainly from Germany, contrasts with the country's pivotal role in shaping EU sustainability policies. Proposals to mitigate SMEs' reporting obligations and delays in implementing CSRD specifics until 2026 underscore the balancing act between regulatory aims and business practicalities.

What's next: The potential rejection of the CS3D by the Council, driven by Germany's abstention, casts uncertainty on the future of EU sustainability legislation. With the 2024 European Parliament election looming, shifts in political dynamics could further influence the trajectory of EU sustainability initiatives, highlighting the delicate interplay between environmental advocacy and economic considerations.

Tool protecting artworks from AI training models hits 250,000 downloads in five days

Artificial Intelligence, Legal

  • Nightshade, a tool designed to prevent AI companies from using artworks to train models without consent, has seen massive interest, hitting 250,000 downloads in just five days post-launch.

  • The tool, which distorts images at a pixel level, aims to make unauthorised AI training on artworks more difficult, nudging the industry towards a model where licensing artworks from creators is more attractive.

  • This surge in interest highlights a growing concern among artists globally about their works being used without permission to train AI models.

Why It Matters: Nightshade represents a significant step in the ongoing battle between artists and AI companies over the use of artworks. The tool's rapid adoption signals a strong desire within the artistic community to safeguard their intellectual property against unauthorised AI exploitation.

Backdrop: This development follows the success of Glaze, another tool by the same team that has amassed over 2.2 million downloads since April 2023. Glaze helps protect the unique stylistic elements of artists' work from being copied by AI.

What's Next: The team behind Nightshade and Glaze is now working on a combined tool that integrates both defensive and offensive measures against AI training on unlicensed images. Despite the complexity of using both tools together, the enthusiastic response from artists underscores a deep commitment to intellectual property protection.

Gender bias towards female CEOs behind significant drop in stock recommendations

Business, Equality

Gender equality in the workplace is increasingly under the microscope, and a new study about speech patterns' impact on market responses is challenging our perceptions of bias at the highest levels of corporate leadership.

Why it matters: A study by researchers from London Business School, University of Bergen, and Saïd Business School at the University of Oxford has unveiled a stark gender bias in the corporate world. It turns out that when female CEOs use "uptalk" — a rising intonation at sentence ends commonly used by women, including trans women — their companies suffer immediate stock valuation drops. No similar effect is seen for male CEOs.

The big picture: Despite strides towards gender equality, the corporate ladder's top rungs still present a challenging climb for women. Focusing on earnings calls from US firms between 2011 and 2019, this research points to subtler forms of bias that affect even the most senior women leaders.

By the numbers: Analysis of over 1,500 CEOs and nearly 29,000 analyst recommendations showed a 12%-20% drop in stock recommendations for each percentage-point increase in female CEOs' uptalk. Stocks of companies led by uptalk-using female CEOs saw an average post-call return of -0.84%.

Between the lines: The study highlights "uptalk" as more than a speech pattern. In the context of female CEOs, it becomes a costly marker of gender bias, affecting perceptions of leadership ability and, by extension, company value.

What's next: This research underscores the need for a cultural and institutional shift towards recognising and valuing diverse leadership styles. It calls for greater awareness of unconscious biases and their substantial impacts, advocating for systemic changes that support and empower women in leadership positions, regardless of their speech patterns.

TikTok only the fifth most-used social media platform among young Americans, trailing far behind YouTube

Internet

A recent survey from the Pew Research Center shows that the social media habits of adults in the United States reflect a clear preference for YouTube and Facebook. While TikTok's user base has expanded significantly since 2021, the Chinese-owned app still trails behind the most popular platforms in all age groups.

Key findings:

  • YouTube dominates: 83% of US adults use the platform, making it the most popular.

  • Facebook follows: With 68% usage among Americans, it remains a key player.

  • Instagram in third: Nearly half of the adults (47%) are on Instagram.

  • Emerging platforms: Pinterest, TikTok, LinkedIn, WhatsApp, and Snapchat see 27% to 35% usage rates, while Twitter (now "X") and Reddit are used by about 20%.

  • TikTok's rise: Usage jumped to 33% from 21% in 2021, marking significant growth. Still, in 2023, young Americans ages 18 to 29 were more likely to report using YouTube (93%), Instagram (78%), and Facebook (67%) than TikTok (62%).

Demographic trends:

  • Young adults lead: Platforms like Instagram, Snapchat, and TikTok are especially popular among those aged 18-29.

  • Age divides: While YouTube and Facebook are used across age groups, younger adults are far more likely to engage with multiple platforms.

  • Education and income influence: LinkedIn usage spikes among those with higher education, and Twitter sees more activity among higher-income households.

What's new:

  • BeReal's debut: 3% of adults report using the photo-based app launched in 2020.

The bottom line: Social media remains integral to our lives, with platforms like YouTube and Facebook broadly used by US adults. While TikTok shows remarkable growth, it still trails far behind the most popular platforms in all age groups.

Study Shows Large Language Models’ Role as Double-Edged Sword in Misinformation Wars

Artificial Intelligence, Disinformation

New research encapsulates the dual nature of Large Language Models (LLMs) in the digital age—both as a catalyst for misinformation campaigns and a potential tool to identify them. As technology evolves, so does the complexity of the cat-and-mouse game between malicious actors and those striving to maintain integrity in the digital sphere.

A Balanced Perspective: While LLMs present a formidable array of tools for manipulators, they also offer unprecedented opportunities for detection and defence. This underscores the importance of continuous innovation in countermeasures to stay ahead in the ongoing cat-and-mouse game of digital influence campaigns.

Key Concerns:

  • Evasion Tactics: The emergence of self-hosted open-source models enables adversaries to sidestep the security protocols of major platforms developed by giants like Anthropic, Google, and OpenAI, complicating efforts to moderate malicious content.

  • Language Concerns: Multilingual LLMs have simplified the process for foreign actors to masquerade as locals, enhancing the potency of synthetic media in influence operations.

  • Human-AI Collaboration: The research pinpoints that the most immediate threat comes from operations that blend human creativity with the efficiency of LLMs, significantly amplifying the scale and sophistication of influence campaigns.

A Glimmer of Hope:

  • Advancing Detection and Defence: Despite the challenges, there's a silver lining, as these technologies can also strengthen detection methods. Strategies such as leveraging internal generative models for better training data, identifying statistical anomalies, and developing specialised classifiers are on the rise, helping to counter the threats more effectively.

  • Technological and Legal Measures for Countering Misuse: Measures like Reinforcement Learning from Human Feedback (RLHF) offer hope since they can reduce the ability of generative AI models to generate harmful, biased or inappropriate content. These techniques, in combination with legal frameworks, are crucial in mitigating misuse and ensuring that AI advancements do not fall into the wrong hands.

The Twitter Bots are Sorry:

ChatGPT sometimes returns error messages when asked to produce offensive content. So, if spam network operators are not paying attention when using ChatGPT, these messages will appear in the spam they generate.


Thank you for reading. I hope you learned something new. ✹

PS. If you got this email from a friend, click here to subscribe.