I'm not talking about getting bitten by a radioactive spider or being given a green ring by some random aliens. I'm talking about 0,0. You may remember it from math class. We've got the vertical Y axis, the horizontal X axis, and (in the example to the left) I've marked our origin, 0,0. This makes a lot of sense for drawing out math problems on a piece of paper, but it doesn't make a ton of sense for writing software.
Every pixel (or point) has a coordinate on a computer screen. Whenever something is placed anywhere, it needs to know a couple of things: how big it is (width and height) and where it should be placed. For every operating system or graphics library, someone had to decide how things would be laid out in the coordinate space.
An example (right) with our origin at the center shows four points on our graph. The point at -1,1 sits at x = -1 and y = +1, in what is called Quadrant II. The dot in Quadrant I (1,1) is in the positive part of both the X and Y axes. This quadrant is often what graphics libraries will use to lay out things in a rectangle (or a window, which for our purposes is just another rectangle). BTW, the lower left is Quadrant III and the lower right is Quadrant IV. Why are they Quadrants I-IV? Just because some mathematician said so, that's why.
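Which quadrant a point lands in is purely a question of the signs of its coordinates. A quick sketch in Python (the function name is mine, not anything from a real graphics library):

```python
def quadrant(x, y):
    """Return the quadrant (1-4) for a point, or 0 if it lies on an axis."""
    if x > 0 and y > 0:
        return 1  # Quadrant I: both positive
    if x < 0 and y > 0:
        return 2  # Quadrant II: negative x, positive y
    if x < 0 and y < 0:
        return 3  # Quadrant III: both negative
    return 4 if x > 0 else 0  # Quadrant IV, or on an axis

print(quadrant(1, 1))   # 1 -- the dot in Quadrant I
print(quadrant(-1, 1))  # 2 -- the point at -1,1
```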
A simple practical example of a coordinate system on a computer is Excel. The origin is A1; row numbers increase as you go down, and columns advance through the alphabet as you go right. This lets you do things like add a column of numbers, say by summing A1, A2 and A3 and putting the result in A4.
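Excel's addressing is just a (column, row) coordinate pair dressed up in letters and numbers. A tiny sketch of how a cell name maps from zero-based indices (single-letter columns only, for illustration):

```python
def cell_name(col, row):
    """Spreadsheet-style address from 0-based column and row indices.
    Only handles columns A-Z; real spreadsheets continue with AA, AB, ..."""
    return chr(ord("A") + col) + str(row + 1)

print(cell_name(0, 0))  # 'A1' -- the origin
print(cell_name(0, 3))  # 'A4' -- where our sum would land
```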
Now, if you noticed, this doesn't follow our normal graph: the Y axis is inverted. This is actually a common occurrence. I can definitely say that I like that the layout numbers are all positive.
Excel's coordinate system actually makes a lot of sense when you know that Windows has the origin at the top left.
Mac OS X puts the origin at the lower left, while iOS puts it in the upper left.
To make matters a little weirder, there is a shared 2D graphics library called SpriteKit that works very well on both iOS and Mac OS X, and its origin is at the lower left.
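Converting between a bottom-left origin (Mac OS X, SpriteKit) and a top-left origin (iOS) only requires flipping the Y value against the container's height. A minimal sketch, with a made-up function name:

```python
def flip_y(y, container_height):
    """Convert a y coordinate between top-left-origin and bottom-left-origin
    systems. The conversion is its own inverse: applying it twice gets you
    back where you started."""
    return container_height - y

# In a 480-point-tall view, the top edge in one system is
# the bottom edge in the other:
print(flip_y(0, 480))    # 480
print(flip_y(480, 480))  # 0
```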
But wait, didn't I just say that iOS's origin is in the top left corner? How could that be?
Whenever we place something inside a parent (like a window), it has local coordinates. The example to the left shows how Mac OS has a local coordinate system for the screen, and then another for the window. The leaf graphic is simply a 'child' of the window, so it exists in the window's local coordinate system (on the web, CSS describes this as being positioned 'relative' to a parent).
This allows us to have different coordinate systems encapsulated in windows or really any rectangles.
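Going from a child's local coordinates to its parent's coordinate space is just an offset by the parent's origin. A rough sketch (names are mine):

```python
def local_to_parent(local_x, local_y, parent_origin_x, parent_origin_y):
    """Translate a child's local coordinates into its parent's space by
    offsetting by the parent's own origin. Chain this up through windows
    and screens to get absolute coordinates."""
    return (parent_origin_x + local_x, parent_origin_y + local_y)

# A leaf at (10, 20) inside a window whose origin is at (100, 50):
print(local_to_parent(10, 20, 100, 50))  # (110, 70)
```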
This brings me to the last topic, rectangles. Look under your seats. You get a rectangle, you get a rectangle, everybody gets a rectangle!!!!
Circles are rectangles. Lines are rectangles. Squares are rectangles. Sure, it doesn't make much sense at face value. Just remember that everything on a computer monitor is made up of square pixels. Everything is a rectangle, because everything has a height, a width and an anchor point. With UIKit elements (iOS) we have a property called 'center' as well as a 'frame' that contains the height and width. Those salient bits of information let the system place the graphic, text or whatever in exactly the right spot.
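UIKit's frame and center relate to each other with simple arithmetic. Here's a sketch with a hypothetical Rect class (not UIKit's actual CGRect, just the same idea):

```python
class Rect:
    """A rectangle described by its origin (top-left here) plus its size."""

    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height

    @property
    def center(self):
        # The center is just the origin offset by half the size.
        return (self.x + self.width / 2, self.y + self.height / 2)

r = Rect(0, 0, 100, 50)
print(r.center)  # (50.0, 25.0)
```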
Ok, let's toss a few wrenches into this. Depending on what you're developing for, the anchor point might be in a different position. It can also be changed, as in SpriteKit, to great effect. In that case it might make a spaceship look more real, because it pivots at the correct point when you animate it.

I mentioned 'correct point', which brings up the topic of pixels vs points. Pixels are actual physical pixels on a screen; points are a coordinate system that maps onto those pixels. This lets us lay out designs for an iPhone that is 480x320 pixels (the original iPhone) and also one that is 960x640 (the iPhone 4) while still using only 480x320 points. That way we don't need to constantly branch our code to place items on screens that are physically the same size. That is the principal reason for using points: to make layout independent of resolution. 'Retina' doubled the pixel density, but the points system stayed the same. Points are also an important physical aspect of designing for touch devices: Apple's HIG says touch targets should be no smaller than 44x44 points.
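The mapping from points to pixels is a single multiplication by the screen's scale factor. A minimal sketch (function name is mine):

```python
def points_to_pixels(points, scale):
    """Map resolution-independent points to physical pixels.
    The scale factor is 1 on the original iPhone and 2 on the
    Retina iPhone 4."""
    return points * scale

# A 44-point touch target, on both screens:
print(points_to_pixels(44, 1))  # 44 pixels on the original iPhone
print(points_to_pixels(44, 2))  # 88 pixels on an iPhone 4
```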
I will not rotate around to explain the esoteric difference between frame and bounds on iOS/OS X, but suffice it to say that they are important.
What this boils down to is that all computer systems have coordinate spaces, and many of them differ in interesting ways. Every element on the screen needs a height, a width and an anchor point so it can be placed in exactly the right spot.