Scott Viteri

PhD Candidate in Computer Science

Stanford University

Biography

I am a CS PhD candidate at the Center for Automated Reasoning at Stanford University, advised by Prof. Clark Barrett. My research revolves around producing informative and collaborative language models via reinforcement learning. My goal is to ensure that wisdom and intelligence arise in AIs at similar rates, unlike the broader trend in human history.

During my PhD, my focus has evolved from formal verification and programming languages to AI alignment, prompted by my belief that robust AI represents a substantial existential threat to humanity. Before Stanford, I majored in computer science and electrical engineering at MIT, contributing to AI and robotics research. After MIT, I explored interactive theorem proving at CMU with Simon DeDeo, publishing research in Cognition on the role of abduction in mathematics. I am now in the fourth year of my PhD, during which I have worked on various projects in SMT solving and interactive theorem proving. I have also explored ontology mapping as a technique for targeted neural network interpretability.

My primary character trait is curiosity, and I really love math.

Interests

  • AI Alignment
  • Large Language Models
  • Interactive Theorem Proving
  • Evolving Prosociality in LMs
  • Geometric Algebra

Education

  • Ph.D. in Computer Science (emphasis in Artificial Intelligence), in progress

    Stanford University

  • B.S. in Computer Science and Electrical Engineering, 2018

    Massachusetts Institute of Technology

Publications

(2024). Markovian Agents for Informative Language Modeling. arXiv.

(2022). Flexible Proof Production in an Industrial-Strength SMT Solver. IJCAR.

(2021). Epistemic Phase Transitions in Mathematical Proofs. Cognition.

Talks

Ontology Identification and Utility Functions

Abstract: For STS 10SI – Intro to AI Alignment in Winter 2023, I talked about eliciting latent knowledge (ELK), cooperative inverse reinforcement learning, and shard theory. I first introduce the ELK problem as formulated by Paul Christiano, then go through the main proposals and counterexamples from the ELK document.

Computation, Communication, and Ontology Maps

Abstract: How can we create intelligent systems that retain and expand what we find valuable in the universe? In this talk I present my own thoughts based on ontology mapping, and I explain why I believe that mathematicians who think about systemic interactions are especially well placed to answer this important question.

Blog Posts

Clifford Algebra and SageMath Tutorial

Kernel: SageMath 10.2. This notebook is a tutorial on how to use Clifford algebras and SageMath. To run it, you will need either a local Sage executable, or to run it on CoCalc.
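
To give a flavor of the notebook, here is a minimal sketch (my own illustration in Sage's Python syntax, not an excerpt from the tutorial) that constructs a Clifford algebra from a quadratic form and checks the defining relations:

    # Clifford algebra of the standard quadratic form x^2 + y^2 + z^2 on QQ^3.
    # QuadraticForm takes the upper-triangular coefficients of the form.
    Q = QuadraticForm(QQ, 3, [1, 0, 0, 1, 0, 1])
    Cl = CliffordAlgebra(Q)
    e0, e1, e2 = Cl.gens()
    print(e0 * e1 + e1 * e0)  # distinct generators anticommute: prints 0
    print(e0 * e0)            # each generator squares to its Q-value: prints 1

Run inside a Sage session or kernel, where QQ, QuadraticForm, and CliffordAlgebra are all built in.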

Deep Dream for Transformers

[Figures: GPT-2 visualization; GPT-2 model structure]
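
In a hedged sketch of the idea (my reconstruction using PyTorch and Hugging Face transformers, not the post's actual code), DeepDream for a transformer means freezing GPT-2, treating a soft input embedding as the optimizable "image", and running gradient ascent on a chosen activation. The layer and neuron indices below are arbitrary illustrative choices:

    import torch
    from transformers import GPT2LMHeadModel

    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)  # only the input embeddings get optimized

    layer, neuron, seq_len = 6, 42, 8  # illustrative choices
    emb = torch.randn(1, seq_len, model.config.n_embd, requires_grad=True)
    opt = torch.optim.Adam([emb], lr=1e-2)

    acts = {}
    hook = model.transformer.h[layer].mlp.register_forward_hook(
        lambda mod, inp, out: acts.update(h=out))

    for step in range(200):
        opt.zero_grad()
        model(inputs_embeds=emb)                # forward pass fills acts["h"]
        loss = -acts["h"][0, :, neuron].mean()  # ascend the neuron's activation
        loss.backward()
        opt.step()
    hook.remove()

The optimized embeddings can then be mapped back to their nearest tokens to inspect what the neuron "dreams" about.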

Joint Text-EEG Embeddings

Here is a proposal I pitched to Emotiv, an EEG company, since they have a large amount of EEG data and the infrastructure to pay customers to help with distributed data collection. They were interested, but we ultimately stopped short of my working there for the summer.
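
The natural starting point for joint embeddings is a CLIP-style contrastive objective over paired EEG and text segments, so that matched pairs score higher than mismatched ones. Below is a minimal sketch under that assumption; the dimensions, batch size, and the encoders being stubbed out as random tensors are hypothetical placeholders, not part of the proposal:

    import torch
    import torch.nn.functional as F

    def contrastive_loss(eeg_emb, text_emb, temperature=0.07):
        # Normalize, then score every EEG clip against every text snippet.
        eeg = F.normalize(eeg_emb, dim=-1)
        txt = F.normalize(text_emb, dim=-1)
        logits = eeg @ txt.T / temperature
        labels = torch.arange(len(logits))  # matched pairs sit on the diagonal
        return (F.cross_entropy(logits, labels)
                + F.cross_entropy(logits.T, labels)) / 2

    # Stand-in embeddings for a batch of 16 EEG/text pairs of dimension 256.
    loss = contrastive_loss(torch.randn(16, 256), torch.randn(16, 256))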

Democratic AI Constitution: Round-Robin Debate and Synthesis

In this document, we propose a comprehensive and dynamic approach to the challenge of creating a democratic constitution for AI systems. Recognizing the diversity and complexity of various demographic, political, and philosophical groups, our approach generates group-specific constitutions using OpenAI's GPT-4 model.
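
As a hedged illustration of one way to realize the debate-and-synthesis step (my sketch, not the document's actual pipeline; the prompts, placeholder group names, and model string are illustrative), using the openai Python client:

    from openai import OpenAI

    client = OpenAI()

    def ask(prompt):
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    groups = ["group A", "group B", "group C"]  # placeholder demographics
    drafts = [ask(f"Draft an AI constitution from the perspective of {g}.")
              for g in groups]

    # Fold each group's draft into a running synthesis via pairwise debate.
    synthesis = drafts[0]
    for draft in drafts[1:]:
        synthesis = ask(
            "Debate the disagreements between these two constitutions and "
            f"synthesize a single merged version.\n\nA:\n{synthesis}\n\nB:\n{draft}"
        )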

Nature < Nurture for AIs

Let’s imagine a hypothetical scenario where an AI is somehow trained in a way that is analogous to a human childhood in all of the relevant ways. Maybe it has a loving mother and family, maybe it has to learn to play well with peers, and maybe it has to learn to integrate into a larger social context.