📖 Holacracy: The New Management System for a Rapidly Changing World by Brian J. Robertson (2015). If you wait a decade to read a book, you'll discover that all the major early adopters (Medium, Zappos) have already abandoned this newfangled philosophy.
📖 The Great Z'manim Debate: The History, The Science, And The Lomdus by Rabbi Ahron Notis (2022; via Harold Zazula). Solid introduction to the relevant astronomical terms and older and more modern astronomical models. Good discussion of the relevant halachot.
📖 Halakhic Positions of Rabbi Joseph B. Soloveitchik: Volume 8 by Aharon Ziegler (2020). Only two more to go until I finish the series.
📖 Outlive: The Science and Art of Longevity by Peter Attia & Bill Gifford (2023; via Isaac Selya). Assuming you want to have certain kinds of mobility when you're older, this book helps you work backwards to where you want to be today.
📖 The Education of an Idealist: A Memoir by Samantha Power (2019; via Tyler Cowen).
Some may interpret this book’s title as suggesting that I began with lofty dreams about how one person could make a difference, only to be “educated” by the brutish forces that I encountered. That is not the story that follows.
I mean, that's kinda what happens. She learns very quickly how to say the right things to get the things she wants.
📖 The E-Myth Revisited: Why Most Small Businesses Don't Work and What to Do About It by Michael E. Gerber (2009; via Shalev NessAiver). Inside you there are three wolves. The Technician just wants to do good work and thinks bottom up. The Entrepreneur looks ahead and thinks top down. The Manager wants everything in a little box and hates change.
📝 Inside the AI boom that's transforming how consultants work at McKinsey, BCG, and Deloitte (Lakshmi Varanasi / Business Insider; via Moshe Kinderlehrer). I wonder how long this policy will last:
McKinsey consultants have access to ChatGPT with guardrails and cannot input client data into it.
📝 DOGE Days (Sahil Lavingia; via Tyler Cowen). In which Sahil discovers, like many before him, that you may have aspirations to write wonderful code to help the government, but you will be thwarted every step of the way.
📝 Saying Bye to Glitch (Pirijan Keth; via Simon Willison). I never used Glitch, but I remember when it was announced and thinking that it was very cool. Pirijan also describes how various FogCreek software was ahead of its time.
📝 Thoughts on thinking (Dustin Curtis). As the nature of work transitions, I expect to see lots of posts of this nature.
📝 How I taught my 3-year-old to read like a 9-year-old (Erik Hoel / The Intrinsic Perspective).
It’s all the advantages of an iPad, none of the guilt. You’ve unlocked infinite self-entertainment. Long drive? Bring a book. Or five. Roman toddles into restaurants clutching a book as a backup activity, and reads while waiting in boring lines. It’s also calming, and so helps with emotional regulation. Toddler energy descending rapidly into deviance? Go read a book! It’s a parenting cheat code. I don’t know if this alone justifies the hours spent, but it sure is one heck of a benefit.
📝 I really don’t like ChatGPT’s new memory dossier (Simon Willison). It's pretty detailed.
📝 LLM Memory (Grant Slatton). I like the mini taxonomy of approaches to memory. I think this shows us that we don't yet have a really good model of our own memory.
📝 The Price of Remission (David Armstrong / ProPublica).
When I started taking the drug, I’d look at the smooth, cylindrical capsule in my hand and consider the fact I was about to swallow something that costs about the same as a new iPhone. A month’s supply, which arrives in an ordinary, orange-tinged plastic bottle, is the same price as a new Nissan Versa.
📝 Book Review: Selfish Reasons To Have More Kids (Scott Alexander / Astral Codex Ten). "Have you tried doing less?" is often a good question. I don't think I've been able to implement the advice (at all?).
📝 AI-Generated Law (Bruce Schneier). Soon: AI laws on the limits of AI laws.
📝 I don't like NumPy (Dynomight). This one is for everyone who has ever had to try `axis=0` when doing an operation in `numpy`. The points about indexing and broadcasting hit home hard. I'm looking forward to the API proposals.
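For anyone who hasn't played the `axis` guessing game, a minimal sketch of the ambiguity (the array values are my own toy example):

```python
import numpy as np

# A 2x3 array: two rows, three columns.
x = np.arange(6).reshape(2, 3)  # [[0, 1, 2], [3, 4, 5]]

# axis=0 collapses the rows (sums down each column)...
assert np.sum(x, axis=0).tolist() == [3, 5, 7]

# ...while axis=1 collapses the columns (sums across each row).
assert np.sum(x, axis=1).tolist() == [3, 12]
```

Whether `axis=0` means "operate on rows" or "operate over rows" is exactly the kind of thing you end up re-deriving every time.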
castfit 0.1.2 #
With suggestions from Gemini and ChatGPT, `castfit` 0.1.2 is available. I previously asked both Gemini 2.5 Pro and ChatGPT o3 to review my previous release and provide suggestions. While implementing those suggestions I ended up simplifying several things and making certain interfaces more consistent.
To install `castfit`:

```shell
# modern (recommended)
uv add castfit

# classic
python -m pip install castfit
```
Release Notes 0.1.2 - 2025-05-21T20:54:51Z #
Fixed

- #4: `get_args` on raw types
- #5: `to_tuple` with too-long input
- #18: handling of `types.UnionType`
- #25: delinted using `pyrefly`
- #27: `str` to `int` conversion when the string has decimal places
- #28: `float` to `datetime` conversion; added UTC timezone
- #29: handling an untyped default value in a class
Changed

- #14: `TypeForm` comment to clarify what we want
- #19: set instance fields based on class metadata rather than trying to put all the data into the instance
- #22: register converters based on both source and destination types rather than assuming that each function must convert everything to a specific destination type
- #24: renamed `casts_to` to `casts` and added support for a short-form (1 argument) cast function
- #30: updated the public API to be more compact and consistent
Added

- #2: support for nested types
- #3: original cause of `to_type` error
- #6: additional `datetime` formats
- #7: custom casts to `castfit` (closes #7)
- #11: more README examples
- #12: more complete docstrings for the public API
- #15: cache for fetching `get_origin` and `get_args` information
- #16: `DEFAULT_ENCODING` constant for `to_bytes` and `to_str`
- #17: alternatives to README
- #20: infer types based on class field defaults
- #31: more negative tests
Removed

- #21: `castfit` on an instance instead of a `type`
Won't Fix

- #8: Gemini suggested having an explicit caster for `pathlib.Path`
- #9: Gemini suggested having an explicit recursive dataclass/class casting
- #10: Gemini suggested optionally collecting errors instead of raising the first one encountered. It's a good idea, but not for now.
- #13: Tried implementing a workaround for `TypeGuard` in older versions of Python, but it didn't work.
- #23: Started and rolled back `is_callable` because `castfit` can't currently do anything with a callable that is the wrong type.
- #26: Rolled back having a `checks` parameter that overrides how types are checked.
- #32: Tried fixing `TypeForm` to be the union of `type[T]` and `_SpecialForm`, but only `pyright` was able to handle it. `mypy` still can't handle it and `ty` isn't mature enough yet.
TIL: invariant, covariant, and contravariant type variables #
I avoided learning this for so long.
While working on `castfit`, I ended up going down a rabbit hole learning about what makes something a subtype of something else. I vaguely knew the terms "invariant", "covariant", and "contravariant", but had very carefully avoided ever learning what they meant.
If you look at the Wikipedia entry on Covariance and contravariance (computer science) you see explanations like:
Suppose `A` and `B` are types, and `I<U>` denotes application of a type constructor `I` with type argument `U`. Within the type system of a programming language, a typing rule for a type constructor `I` is:

- covariant if it preserves the ordering of types (≤), which orders types from more specific to more generic: If `A ≤ B`, then `I<A> ≤ I<B>`;
- contravariant if it reverses this ordering: If `A ≤ B`, then `I<B> ≤ I<A>`;
- bivariant if both of these apply (i.e., if `A ≤ B`, then `I<A> ≡ I<B>`);
- variant if covariant, contravariant or bivariant;
- invariant or nonvariant if not variant.
Uh, super helpful. The introductory description is a lot better than it used to be, but talking it over with ChatGPT was actually the best way for me to understand what these terms mean.
Simpler Definitions #
Suppose we have a type hierarchy like:
Labrador → Dog → Animal
Siamese → Cat → Animal
The arrow indicates an "is a" relationship moving from more specific (narrow) to more general (broad): a Labrador is a type of Dog; a Dog is a type of Animal, etc.
The question of co/contra/invariance comes up when you want to know: Can I substitute one type for another?
It turns out that this depends on many things, but the situations are classified as:
- Invariant: It must be this exact type. No broader or narrower substitutions allowed.
- Covariant: The same or a narrower (more specific) type is acceptable.
- Contravariant: The same or a broader (more general) type is acceptable.
- Bivariant (haven't seen this in Python): The same, a broader, or a narrower type is acceptable.
This mostly comes into play when you start looking at containers of objects like `list`, `set`, `tuple`, `dict`, etc. Also, Python has several exceptions to its own rules that are not immediately obvious.
General Python Rules #
- The contents of mutable containers are invariant. For example, if a function takes `list[Animal]`, you cannot pass `list[Dog]` because that function might add a `Cat` (which is an `Animal`), thereby violating the type safety of the `list[Dog]` that was passed in.
An exception to this rule in Python is when "implicit conversion" occurs. A function that takes `list[int]` is ok to take `list[bool]` because there is an implicit conversion that happens. It seems that the numeric tower of `bool → int → float → complex` all happens implicitly. There are a few other implicit conversions that I'm still learning about.
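The `bool → int` end of that tower is visible at runtime, not just in the type checker; a quick sketch (the `half` function is my own toy example):

```python
# bool is literally a subclass of int in Python:
assert issubclass(bool, int)
assert isinstance(True, int)
assert True + True == 2

# And PEP 484's numeric tower means an int is accepted
# wherever a float annotation appears:
def half(x: float) -> float:
    return x / 2

assert half(5) == 2.5  # passing an int where float is expected is fine
```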
- The contents of immutable containers (or read-only situations) are covariant. A function that takes `Sequence[Animal]` is ok to take `Sequence[Dog]` because `Sequence` is read-only (and `Animal` is broader than `Dog`).
Not really an exception, but fun fact: fixed-length tuples are covariant, while variable-length tuples are invariant.
PEP 483 has a nice example of contravariant types: `Callable`. While it is covariant in its return type (`Callable[[], Dog]` is a subtype of `Callable[[], Animal]`), it is contravariant in its arguments (`Callable[[Animal], None]` is a subtype of `Callable[[Dog], None]`).
This leads to a good rule of thumb:

The example with `Callable` shows how to make more precise type annotations for functions: choose the most general type for every argument, and the most specific type for the return value.
It looks like PEP 695 made it into Python 3.12, so maybe reasoning about this will get easier in the future, especially because it automatically infers the variance of type variables.
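Before PEP 695, variance had to be declared by hand on the `TypeVar`; a minimal sketch (the `ReadOnlyBox` class is my own invented example):

```python
from typing import Generic, TypeVar

T_co = TypeVar("T_co", covariant=True)  # variance declared manually

class ReadOnlyBox(Generic[T_co]):
    """A read-only container, so covariance is safe:
    a ReadOnlyBox[Dog] can be used as a ReadOnlyBox[Animal]."""

    def __init__(self, item: T_co) -> None:
        self._item = item

    def get(self) -> T_co:
        return self._item

# With PEP 695 (Python 3.12+) this collapses to:
#     class ReadOnlyBox[T]: ...
# and the checker infers that T is covariant from how it's used.

assert ReadOnlyBox(42).get() == 42
```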
Trying pyrefly, Meta's type checker #
I guess I like trying type checkers.
Meta announced that they are open sourcing their type checker, `pyrefly`. It was easy enough to try:

```shell
uvx pyrefly check
```
On my tiny 640 lines of `castfit` (which passes `pyright` and `mypy --strict`) I got 74 errors. Going through some of these was instructive.
`==` is not supported between `type[@_]` and `type[MyList]`
This was in response to code like:

```python
assert castfit.get_origin_type(MyList) == MyList
```
Good catch. That `==` should be `is`.
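For context on why identity is the right comparison: the standard library's `typing.get_origin` (which my hypothetical `get_origin_type` wraps) hands back the runtime class itself, and classes are conventionally compared with `is`:

```python
from typing import get_origin

# The origin of a parameterized generic is the runtime class itself:
assert get_origin(list[int]) is list
assert get_origin(dict[str, int]) is dict

# A bare (non-parameterized) class has no origin:
assert get_origin(list) is None
```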
Can't apply arguments to non-class, got `LegacyImplicitTypeAlias[TypeForm, type[type[TypeVar(T, invariant)] | Any]]` [bad-specialization]
Not sure what this means or how to handle it.
Argument `Forall[T, (value: Any, kind: Unknown) -> bool]` is not assignable to parameter with type `(Any, Unknown) -> bool` [bad-argument-type]
Imagine my confusion when I'm trying to figure out why this is not assignable. Must be that `Unknown` can't be assigned to `Unknown`. I might try this again in the future, but the error messages could definitely use some work.
🐦 Tom Johnson on AI-assisted Programming (Tom Johnson).
I just spent a week using Cursor and PyCharm's AI [Assistant]. The goal was to take a CLI-based Python script and add a basic front end UI. By the end of the day yesterday, I felt like chewing aluminum foil would be more fun ...
Recommend reading the whole thread. There are ways I imagine asking the AI differently, but I completely get the pain being described.
📝 New paradigm for psychology just dropped (Adam Mastroianni). I queued up The Mind in the Wheel by Slime Mold Time Mold, but haven't read it yet. Adam's point is that a paradigm has to explain which things do what and what rules they follow.
So let’s get clear: a paradigm is made out of units and rules. It says, “the part of the world I’m studying is made up of these entities, which can do these activities.”
He lays out three different kinds of research:
- Naive research: run experiments without knowing anything about the units and rules.
- Impressionistic research: make up words and study them.
The problem with this approach is that it gets you tangled up in things that don’t actually exist. What is “zest for life”? It is literally “how you respond to the Zest for Life Scale”. And what does the Zest for Life Scale measure? It measures...zest for life.
- Actual science research (my name): doing experiments where you can actually learn about the units and rules.
We’re not doing impressionistic research here, so we can’t just create control systems by fiat, the way you can create “zest for life” by creating a Zest for Life Scale. Instead, discovering the drives requires a new set of methodologies. You might start by noticing that people seem inexplicably driven to do some things (like play Candy Crush) or inexplicably not driven to do other things (like drink lemon juice when they’re suffering from scurvy, even though it would save their life). This could give you an inkling of what kind of drives exist. Then you could try to isolate one of those drives through methods like:
Prevention: If you stop someone from playing Candy Crush, what do they do instead?
Knockout: If you turn off the elements of Candy Crush one at a time—make it black and white, eliminate the scoring system, etc.—at what point do they no longer want to play?
Behavioral exhaustion (knockout in reverse): If you give people one component of Candy Crush at a time—maybe, categorizing things, earning points, seeing lots of colors, etc.—and let them do that as much as they want, do they still want to play Candy Crush afterward?
Adam also covers why neuroscience is the wrong level of abstraction for learning the things we need to learn.
📝 7 Phrases I use to make giving feedback easier for myself (Wes Kao). "I noticed" that "this is a great start" for giving feedback in a modern environment. "At the same time" going forward you could sound "even more" authentic because "I believe you were trying to avoid hard feelings, but it doesn't quite work because it uses canned phrases. I recommend trying having a conversation." "From what I've seen" canned phrases come across as fake. You're "already" giving feedback regularly which is fantastic. Now, I'd love for you to have more real conversations where you're clear about your expectations.
(Seriously, though, the ideas behind the phrases and examples are extremely realistic.)
📝 The Wrong Way to Motivate Your Kid (Russell Shaw / The Atlantic). The advice is to find an "island of competence" and help your kid build up from that. An implication is that we should help kids get competent at more things. Unfortunately, things are trending in the opposite direction.
📝 How to title your blog post or whatever (Dynomight). The advice is to name things in a way that quickly separates the people who'll love it from the people who hate it.
📝 The magic of software; or, what makes a good engineer also makes a good engineering organization (Moxie Marlinspike; via Changelog News). Nice essay about how vision is translated into engineering. Also discusses abstraction layers and how organizations are structured.
However, there are two ways of interacting with an abstraction layer: as shorthand for an understanding of what it is doing on your behalf, or as a black box. I think engineers are most capable and most effective when abstraction layers serve as shorthand rather than a black box.
📝 The Troubling Business of Bounty Hunting (Jeff Winkler / The New York Times; via Thinking About Things). I feel like my mind updated every movie plot that involves someone skipping bail or a bounty hunter to be much more terrifying.
📝 How much information is in DNA? (Dynomight). I knew about both Kolmogorov complexity (smallest program that produces an output) and Shannon information (probability of a sequence appearing in a large pool of sequences), but this was a cool way of combining the two concepts. Plus the craziness that is biology.
📝 What I've learned about writing AI apps so far (Laurie Voss). Currently, LLMs are good at transforming text into less text. This implies you should give them lots of context (drop the whole manual / book / text) and ask a question.
📝 Getting things "done" in large tech companies (Sean Goedecke). Short post about how to know when a project is done and how to make sure executives know what was done.