Selected Publications
M Duhne, A Mohebi, K Kim, L Pelattini, JD Berke (2024)
Proceedings of the National Academy of Sciences
Striatal acetylcholine and dopamine critically regulate movement, motivation, and reward-related learning. Pauses in cholinergic interneuron (CIN) firing are thought to coincide with dopamine pulses encoding reward prediction errors (RPE) to jointly enable synaptic plasticity. Here, we examine the firing of identified CINs during reward-guided decision-making in freely moving rats and compare this firing to dopamine release. Relationships between CINs, dopamine, and behavior varied strongly by subregion. In the dorsal–lateral striatum, a Go! cue evoked burst–pause CIN spiking, followed by a brief dopamine pulse that was unrelated to RPE. In the dorsal–medial striatum, this cue evoked only a CIN pause that was curtailed by a movement-selective rebound in firing. Finally, in the ventral striatum, a reward cue evoked RPE-coding increases in both dopamine and CIN firing, without a consistent pause. Our results demonstrate a spatial and temporal dissociation between CIN pauses and dopamine RPE signals and will inform future models of striatal information processing under both normal and pathological conditions.
A Mohebi, W Wei, L Pelattini, K Kim, JD Berke (2024)
Nature Neuroscience
Animals make predictions to guide their behavior and update those predictions through experience. Transient increases in dopamine (DA) are thought to be critical signals for updating predictions. However, it is unclear how this mechanism handles a wide range of behavioral timescales—from seconds or less (for example, if singing a song) to potentially hours or more (for example, if hunting for food). Here we report that DA transients in distinct rat striatal subregions convey prediction errors based on distinct time horizons. DA dynamics systematically accelerated from ventral to dorsomedial to dorsolateral striatum, in the tempo of spontaneous fluctuations, the temporal integration of prior rewards, and the discounting of future rewards. This spectrum of timescales for evaluative computations can help achieve efficient learning and adaptive motivation for a broad range of behaviors.
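The distinct "time horizons" described above can be illustrated with standard exponential discounting, where the discount factor sets how steeply future rewards lose value. This is a minimal sketch, not the paper's model; the gamma values and region labels below are illustrative assumptions, not estimates from the data.

```python
def discounted_value(reward, delay_steps, gamma):
    """Present value of a single reward arriving after delay_steps,
    under exponential discounting with factor gamma (0 < gamma < 1)."""
    return reward * (gamma ** delay_steps)

# A slow, far-sighted horizon vs. a fast, near-sighted one
# (purely illustrative gamma values):
slow_gamma, fast_gamma = 0.99, 0.80
reward, delay = 1.0, 20

v_slow = discounted_value(reward, delay, slow_gamma)  # still substantial
v_fast = discounted_value(reward, delay, fast_gamma)  # nearly worthless
```

Under the slow horizon the delayed reward retains most of its value, while under the fast horizon it is discounted almost to zero: the same reward schedule supports very different evaluations depending on the timescale.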
TA Krausz, AE Comrie, AE Kahn, LM Frank, ND Daw, JD Berke (2023)
Neuron
Animals frequently make decisions based on expectations of future reward (‘values’). Values are updated by ongoing experience: places and choices that result in reward are assigned greater value. Yet, the specific algorithms used by the brain for such credit assignment remain unclear. We monitored accumbens dopamine as rats foraged for rewards in a complex, changing environment. We observed brief dopamine pulses both at reward receipt (scaling with prediction error) and at novel path opportunities. Dopamine also ramped up as rats ran toward reward ports, in proportion to the value at each location. By examining the evolution of these dopamine place-value signals, we found evidence for two distinct update processes: progressive propagation of value along taken paths, as in temporal difference learning, and inference of value throughout the maze, using internal models. Our results demonstrate that, within rich, naturalistic environments, dopamine conveys place values that are updated via multiple, complementary learning algorithms.
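The first update process named in this abstract, propagation of value along taken paths, can be sketched with a textbook tabular TD(0) update. This is a hedged illustration of the general algorithm, not the paper's fitted model; the state names, learning rate, and discount factor are arbitrary assumptions.

```python
def td0_episode(V, path, reward, alpha=0.5, gamma=0.9):
    """One traversal of `path`, whose final state delivers `reward`.
    Each state's value moves toward its TD target, so value
    propagates backward along the path over repeated traversals."""
    for i, state in enumerate(path):
        if i == len(path) - 1:
            target = reward                      # terminal: the reward itself
        else:
            target = gamma * V[path[i + 1]]      # bootstrap from the next state
        V[state] += alpha * (target - V[state])  # TD error drives the update

# Start with no knowledge, then traverse A -> B -> C -> D (reward at D):
V = {s: 0.0 for s in "ABCD"}
for _ in range(3):
    td0_episode(V, list("ABCD"), reward=1.0)
# After a few traversals, value is highest near the reward and has
# begun spreading to earlier places on the path.
```

Because each state bootstraps from its successor, the reward's influence creeps one step earlier per traversal, which is exactly the "progressive propagation along taken paths" signature that distinguishes TD-style updating from model-based inference.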
A Mohebi, JR Pettibone, AA Hamid, JT Wong, LT Vinson, T Patriarchi, L Tian, RT Kennedy, JD Berke (2019)
Nature
The dopamine projection from ventral tegmental area (VTA) to nucleus accumbens (NAc) is critical for motivation to work for rewards and reward-driven learning. How dopamine supports both functions is unclear. Dopamine cell spiking can encode prediction errors, which are vital learning signals in computational theories of adaptive behaviour. By contrast, dopamine release ramps up as animals approach rewards, mirroring reward expectation. This mismatch might reflect differences in behavioural tasks, slower changes in dopamine cell spiking or spike-independent modulation of dopamine release. Here we compare spiking of identified VTA dopamine cells with NAc dopamine release in the same decision-making task. Cues that indicate an upcoming reward increased both spiking and release. However, NAc core dopamine release also covaried with dynamically evolving reward expectations, without corresponding changes in VTA dopamine cell spiking. Our results suggest a fundamental difference in how dopamine release is regulated to achieve distinct functions: broadcast burst signals promote learning, whereas local control drives motivation.
JD Berke (2018)
Nature Neuroscience
Dopamine is a critical modulator of both learning and motivation. This presents a problem: how can target cells know whether increased dopamine is a signal to learn or to move? It is often presumed that motivation involves slow (‘tonic’) dopamine changes, while fast (‘phasic’) dopamine fluctuations convey reward prediction errors for learning. Yet recent studies have shown that dopamine conveys motivational value and promotes movement even on subsecond timescales. Here I describe an alternative account of how dopamine regulates ongoing behavior. Dopamine release related to motivation is rapidly and locally sculpted by receptors on dopamine terminals, independently from dopamine cell firing. Target neurons abruptly switch between learning and performance modes, with striatal cholinergic interneurons providing one candidate switch mechanism. The behavioral impact of dopamine varies by subregion, but in each case dopamine provides a dynamic estimate of whether it is worth expending a limited internal resource, such as energy, attention, or time.
AA Hamid, JR Pettibone, OS Mabrouk, VL Hetrick, R Schmidt, CMV Weele, RT Kennedy, BJ Aragona, JD Berke (2015)
Nature Neuroscience
Dopamine cell firing can encode errors in reward prediction, providing a learning signal to guide future behavior. Yet dopamine is also a key modulator of motivation, invigorating current behavior. Existing theories propose that fast (phasic) dopamine fluctuations support learning, whereas much slower (tonic) dopamine changes are involved in motivation. We examined dopamine release in the nucleus accumbens across multiple time scales, using complementary microdialysis and voltammetric methods during adaptive decision-making. We found that minute-by-minute dopamine levels covaried with reward rate and motivational vigor. Second-by-second dopamine release encoded an estimate of temporally discounted future reward (a value function). Changing dopamine immediately altered willingness to work and reinforced preceding action choices by encoding temporal-difference reward prediction errors. Our results indicate that dopamine conveys a single, rapidly evolving decision variable, the available reward for investment of effort, which is employed for both learning and motivational functions.
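The temporal-difference reward prediction error invoked in this abstract has a compact standard form: the error is the received reward plus the discounted value of the next state, minus the value of the current state. The sketch below shows that form only; the function name and numbers are illustrative, not quantities from the study.

```python
def td_error(r, v_next, v_curr, gamma=0.95):
    """Standard TD reward prediction error:
    delta = r + gamma * V(s') - V(s)."""
    return r + gamma * v_next - v_curr

# An unexpected reward produces a large positive error (a teaching
# signal); a well-predicted reward produces an error near zero.
surprise  = td_error(r=1.0, v_next=0.0, v_curr=0.0)
predicted = td_error(r=1.0, v_next=0.0, v_curr=0.95)
```

The same delta can serve double duty: as a learning signal when positive errors reinforce preceding choices, and, summed into an evolving value estimate, as the moment-to-moment "available reward" variable the abstract describes.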