The first time I needed to actually address Unicode was when I was writing a calculator app and wanted the proper division and multiplication symbols on the buttons. Python was happy to use the forward slash and the asterisk as math operators, but I felt the average user would be more at home with the symbols used in grade school. It took a few minutes of research to get the hang of what was going on and how it worked.

First, some history. Back in the day most text characters were based on the ASCII/ANSI standard, which mapped 128 Western characters to unique 7-bit values. With another bit added, for a total of 8 bits and 256 possible values, you could map the extra range to whatever non-Western characters you might come across, with the help of a code page for those other characters. This gets clumsy when someone doesn't have the required code page, or the wrong code page is used to map the characters.

Unicode was designed to address this by mapping all the world's characters to unique numbers called 'code points'; the original two-byte design allowed for 65,536 of them (modern Unicode has room for far more). Something like U+0041 is used to represent 'A'. The code point is in hexadecimal, so it is 65 in decimal. Unicode gives each character a unique code point, but it doesn't say how that data is stored, and this is where an encoding scheme comes into play.
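As a quick sketch of how this looks in Python (the variable names here are just for illustration), you can write the calculator symbols by code point and convert between characters and their numeric values with ord() and chr():

```python
# Characters written by their Unicode code points.
division_sign = "\u00F7"        # ÷  DIVISION SIGN, code point U+00F7
multiplication_sign = "\u00D7"  # ×  MULTIPLICATION SIGN, code point U+00D7
print(division_sign, multiplication_sign)   # ÷ ×

# U+0041 is 'A'; the hex value 0x41 is 65 in decimal.
print(ord("A"))        # 65
print(hex(ord("A")))   # 0x41
print(chr(0x41))       # A

# The same symbols by name, which can be easier to read in source code.
print("\N{DIVISION SIGN}", "\N{MULTIPLICATION SIGN}")
```

In the calculator app, strings like these can simply be assigned as button labels; the code point notation keeps the source readable even when the editor or terminal font doesn't render the symbols nicely.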