If your result can be negative, use a signed integer. But in many cases you shouldn't: unsigned overflow has well-defined wraparound behaviour, and you can take advantage of that.
Also, while it's true that modern C has the stdint.h typedefs, the old types are still good. All of the standard library, and many libraries you use, use the old types, so this makes interaction with them more practical. Furthermore, it's probably best to use sizes suited to your platform. On LP64 platforms (Linux, macOS), a long is 32-bit on a 32-bit system and 64-bit on a 64-bit system (though note 64-bit Windows is LLP64, where long stays 32-bit). You only need long long in some cases. You can't avoid the traditional C types for things like strings, either.
Really, stdint.h only matters if you're reading binary data or performance is ultra-important, IMO.
Signed integers have undefined overflow, but unsigned integers underflow (wrap past zero) really easily.
IMO unsigned integers shouldn't have a subtraction operator named '-'. The semantics are sufficiently different that a subtraction operation on unsigneds should stand out.
(I am not a fan of using unsigned integers for semantically non-negative numbers. They are not simply a non-negative bounded type; they are a different type of integer, with a different arithmetic that's unfamiliar from everyday maths. But this is a tough row to hoe in C. Other languages, like Java and C#, didn't inherit the same mistake.)
And if it's modern C, why not use the typedefs from stdint.h?