How can we make better decisions when information is highly incomplete?
My work connects decision theory with practical questions about how people and AI systems should reason from incomplete information.
Read the blog series on evaluating AI decision-makers under uncertainty.
About & Research Focus
I'm a decision scientist with over a decade of experience building data systems, probabilistic models, and decision tools in the insurance industry. Before that, I was a philosophy professor, teaching and researching the foundations of decision theory, probability, and logic. My career has been shaped by a single question: how should decision-makers—whether human or algorithmic—form beliefs, choose actions, and revise their thinking in light of evidence?
In my industry work, I focus on designing systems where human expertise and machine intelligence complement each other. This has included building Bayesian models that blend actuarial benchmarks with underwriter judgment, implementing monitoring systems that surface unexpected patterns for human review, and developing AI‑powered tools that augment—rather than replace—the judgment of claims adjusters and underwriters.
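As one illustration of the blending idea (a hypothetical sketch, not the production system—the function name, inputs, and numbers are all made up for this example), an actuarial benchmark and an underwriter's estimate can be combined with a standard precision-weighted normal–normal Bayesian update:

```python
def blend_estimate(benchmark_mean, benchmark_var, expert_mean, expert_var):
    """Precision-weighted Bayesian blend of two normal estimates.

    Treats the actuarial benchmark as a prior and the underwriter's
    judgment as an independent noisy observation of the same quantity.
    Returns the posterior mean and variance.
    """
    w_benchmark = 1.0 / benchmark_var  # precision of the benchmark prior
    w_expert = 1.0 / expert_var        # precision of the expert estimate
    posterior_var = 1.0 / (w_benchmark + w_expert)
    posterior_mean = posterior_var * (
        w_benchmark * benchmark_mean + w_expert * expert_mean
    )
    return posterior_mean, posterior_var

# Illustrative loss-ratio estimates: a tight benchmark (0.05 +/- small
# variance) pulls the blend toward itself; the expert's 0.08 shifts it
# only partway.
mean, var = blend_estimate(0.05, 0.0001, 0.08, 0.0004)
# mean == 0.056, var == 0.00008
```

The appeal of this form is that each source's influence scales with its stated confidence: a well-calibrated expert moves the estimate more, an uncertain one less.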
The projects featured here approach these same questions from a more foundational angle: evaluating decision-makers (human and AI) against formal standards of rationality, developing frameworks for coherent belief and calibrated uncertainty, and building tooling that makes alignment claims empirically testable.