Richard Roth

I am a philosophy PhD student at NYU working on epistemology, language, and decision theory. You can email me at rhr8837@nyu.edu, and find a CV here.

Next year, I'll be a postdoc at MIT, and then an assistant professor at UofT.

Iteration and Preservation. Conditionally accepted at Mind. Manuscript.
    How are your opinions on a supposition related to your unconditional opinions? One simple answer is Material Coincidence: when you are not sure that not q, you are sure that p on the supposition that q just in case you are sure that either not q or p (◇q ⊃ (□_q p ≡ □(q ⊃ p))). I give a novel argument against Material Coincidence: given weak side-conditions, it entails the implausible claim that being sure implies being sure that you are sure (□p ⊃ □□p).
A paper on assertions of higher-order ignorance. Manuscript.
    You are higher-order ignorant when you fail to know whether you know something. I show that ordinary people sometimes say that they are higher-order ignorant, and use this to undermine the claim that knowing implies knowing that one knows (KK: □p ⊃ □□p), and the thought that taking seriously the possibility that one does not know something raises the standards for the application of the word “know” to the point where one fails to satisfy them.
A paper on reliabilism and defeat, with Bar Luzon. Forthcoming in Oxford Studies in Epistemology.
    According to reliabilism, whether a belief is justified, or amounts to knowledge, depends on whether similar beliefs are, or would be, true. We consider familiar arguments that reliabilism is incompatible with defeat, and show how reliabilists can respond to them by understanding similarity in a flexible way. This alternative conception of similarity also allows reliabilists to provide a unified treatment of defeat, including higher-order defeat. However, it also over-generates justification when you learn but ignore excellent evidence for an otherwise unjustified belief. We argue that this problem is much harder, and calls for a structural revision of reliabilism. We propose what we take to be the best such revision.
A paper on accuracy and introspection.
    This paper is about two putative epistemic ideals, Probabilism and Introspection. Probabilism requires having credences that respect the axioms of probability; Introspection requires always being certain what your credences are. I argue that two well-known arguments for Probabilism, once carefully analyzed, either fail or generalize to Introspection. I take this to undermine the arguments. I then suggest that Probabilism is a rational ideal, but Introspection is not, because Probabilism is robust — still worth approximating under noise — whereas Introspection is fragile — only valuable when realized perfectly.
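The accuracy arguments for Probabilism mentioned above can be illustrated with a standard toy case (not drawn from the paper): under the Brier score, credences that violate the probability axioms are accuracy-dominated by coherent ones — some coherent credence function is strictly closer to the truth in every possible world.

```python
# Standard toy illustration of accuracy dominance (Joyce-style argument):
# incoherent credences over p and not-p are Brier-dominated by a coherent
# alternative, whichever world turns out to be actual.

def brier_inaccuracy(credences, world):
    """Sum of squared distances between credences and truth values (0 or 1)."""
    return sum((world[prop] - c) ** 2 for prop, c in credences.items())

# Incoherent: credence 0.6 in p and 0.6 in not-p (these sum to 1.2, not 1).
incoherent = {"p": 0.6, "not_p": 0.6}
# A coherent alternative: credence 0.5 in each.
coherent = {"p": 0.5, "not_p": 0.5}

# The two possible worlds: p true, or p false.
worlds = [{"p": 1, "not_p": 0}, {"p": 0, "not_p": 1}]

for w in worlds:
    # In each world, the coherent credences are strictly more accurate.
    assert brier_inaccuracy(coherent, w) < brier_inaccuracy(incoherent, w)
```

In world p: the incoherent credences score (1 − 0.6)² + (0 − 0.6)² = 0.52, while the coherent ones score 0.25 + 0.25 = 0.5; by symmetry the same holds in world not-p, so the coherent credences dominate.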
A paper on accuracy and epistemic modals, with Mikayla Kelley, Calum McNamara, and Snow Zhang.
    According to accuracy-first epistemology, epistemic norms for credences can be derived from considerations of accuracy, or “closeness to the truth”. The standard way of precisifying this idea leads to collapse results in cases involving epistemic modals. In particular, the accuracy-first view seems to say that you should assign the same credence to the sentences A, ‘might A’, and ‘must A’. If you don't, then your credences are accuracy-dominated, in the sense that there's some other system of credences that's more accurate than yours, no matter how the world turns out to be. We propose a modification of the accuracy-first framework which avoids such collapse results. All of the arguments which made the framework seem attractive in the first place are preserved, and some new desirable constraints are derived, such as that you should assign zero credence to so-called epistemic contradictions — i.e., sentences of the form ‘A and might not A’ (or similar).
No epistemic discounts.
    I explore the combination of epistemic decision theory, which says that we must do what's best in expectation given our knowledge, with optimistic theories of knowledge, according to which we can know all sorts of things that aren't certain on our evidence. I argue that plausible optimistic theories of knowledge make knowledge defeasible and “know” heavily context-dependent in ways that cause problems for epistemic decision theory.
Counterfactualism for Belief and Desire. Handout.
    I defend Counterfactualism: You should believe what it would be best to believe, and desire what it would be best to desire. This may sound platitudinous, but it conflicts with influential alternatives — such as the view that you should believe what is in fact true (or likely true), and desire what is in fact good (or expectedly good). Drawing on self-undermining beliefs and desires, I show that Counterfactualism comes apart from and improves on these competing views. Its signature virtue is normative invariance: whether you should believe or desire something does not depend on whether you in fact believe or desire it. A common objection to views like Counterfactualism is that they supposedly license “epistemic bribes”, implying that you should believe an obvious falsehood when doing so would cause enough true beliefs. I argue that Counterfactualists can reject such bribes if they deny that an attitude is made better by causing other good attitudes. I conclude by sketching two implications of Counterfactualism, for Permissivism and for the fittingness theory of value.
