A computer executes binary instructions known as machine code. This is very hard for humans to understand, and it is impractical to write complex programs in it directly. Humans therefore work with "source code", a textual representation of the instructions. However, the computer cannot execute source code directly, so a transformation step is needed. This is called compilation.
Compilation takes the source code and applies a series of transformations to it to produce machine code. The machine code is packaged up into an object known as a "binary". A binary is just a file containing the machine code, but it will have a particular format depending on what it is for. We will come to these formats later.
The machine code that is produced is specific to the architecture it was compiled for: you cannot take code compiled for i386 and run it on ARM. Such binaries are therefore known as "machine-dependent". The opposite, "machine-independent", refers to anything that is not specific to an architecture, which includes things like documentation and shell scripts. Python and Perl code are also machine-independent, but they still require eventual translation to machine code; the difference is that this happens at runtime, rather than in an explicit compilation step as for C and similar languages.
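To illustrate the runtime-translation case, a small sketch (assuming a `python3` interpreter is installed; the file name is arbitrary):

```shell
# A machine-independent script: the same file runs on any architecture.
cat > greet.py <<'EOF'
print("hi from python")
EOF

# No separate compilation step; the interpreter translates at runtime.
python3 greet.py
```

The script file itself never becomes architecture-specific; only the interpreter running it is a machine-dependent binary.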
There are, in general, two types of binaries. The first are programs: the things you execute to run something. The second are shared libraries, which allow two or more programs to share code for common tasks without wasting space on duplicate copies. The compilation of programs and shared libraries differs, as do the requirements of the packages, but both require a compilation step, and within that step the same process is performed.
Should we also talk about linking? It seems a little unnecessary.