Yeah, Stockfish pioneered the use of NNUE in chess engines
The simplest way to describe a neuron is as a function: it takes in inputs, does some math with them, and outputs a number. The math is much simpler than you might think, though!
Here's a very simple visualization of a neural network

Each circle represents a neuron, the gray boxes are a layer, and the dashed lines are a "weight" (or "connection"). A weight is represented by a number, which can be positive or negative, and is basically how much effect one neuron has on the next.
Each neuron also has what's known as an "activation function", which is basically another math function: it takes one input (the sum of all the inputs multiplied by their corresponding weights) and produces one output. The activation function used in Maxwell, and a lot of other engines, is called ReLU and looks like this:
If Input < 0 then Output = 0
otherwise Output = Input
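In code, ReLU is a one-liner (a minimal Python sketch, not taken from any particular engine):

```python
def relu(x):
    # negative inputs are clamped to 0; positive inputs pass through unchanged
    return max(0.0, x)
```

So `relu(-0.1)` gives `0.0`, and `relu(0.6)` gives `0.6` back.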
So let's say I1 = 0.5, and the weight going from I1 to H1 is -0.2. The output of H1 would be calculated as: ReLU(0.5 * -0.2) -> ReLU(-0.1), and since the input is negative, ReLU outputs 0.
So then for H2, we'll say the weight connecting it to I1 is 1.2. Then you get: ReLU(0.5 * 1.2) -> ReLU(0.6) -> 0.6
So now you go to the output layer, which is a little more complicated because there's more than one incoming neuron (there are 2, since we're moving from the hidden layer to the output layer), so I'll just calculate the top neuron. We'll say the weight from H1 to O1 is -0.9, and from H2 to O1 is 0.3.
First you multiply each input by its corresponding weight, then you sum them up and feed that into the activation function. So you get: ReLU(0 * -0.9 + 0.6 * 0.3) -> ReLU(0 + 0.18) -> ReLU(0.18) -> 0.18. So O1 is 0.18!
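The whole walkthrough can be written as a tiny forward pass. This is just a sketch using the made-up weights from the example above, not Maxwell's actual code:

```python
def relu(x):
    # activation function: clamp negatives to 0
    return max(0.0, x)

# input layer
i1 = 0.5

# hidden layer: each neuron applies ReLU to (input * weight)
h1 = relu(i1 * -0.2)             # ReLU(-0.1) -> 0.0
h2 = relu(i1 * 1.2)              # ReLU(0.6)  -> 0.6

# output layer: sum the weighted inputs first, then apply ReLU
o1 = relu(h1 * -0.9 + h2 * 0.3)  # ReLU(0.0 + 0.18) -> 0.18
```

A real network does exactly this, just with thousands of neurons and weights instead of a handful.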
That was probably a little more than you asked for, but you can't really explain a neuron without explaining a whole network haha

Stockfish uses it too? A 2100 Elo gap is hard to imagine in high-rated chess
Could you explain simplistically what a neuron is?