FlightEye is an Android app that overlays live ADS-B aircraft positions onto your camera feed. Point your phone at the sky and it shows you what is up there — callsign, altitude, bearing. The UI is simple. The math underneath is not.
This post is about that math. Specifically: how do you take a latitude, longitude, and altitude from an API and turn it into a pixel coordinate on a phone screen? The answer involves four coordinate systems and a chain of transformations that have to be exactly right, or the overlay ends up pointing at the ground.
The pipeline
Every frame, FlightEye runs each visible aircraft through four steps:
Geodetic (lat, lon, alt)
→ ENU (East, North, Up)
→ Device frame (sensor axes)
→ Camera frame
→ Screen pixels (x, y)

Each arrow is a different transformation. Get any one wrong and the whole thing breaks in a different, confusing way.
Step 1 — Geodetic to ENU
The API gives aircraft positions as latitude, longitude, and altitude. These are geodetic coordinates — not useful for drawing things on a screen because they are not Cartesian. You cannot subtract two of them and get a direction vector.
The first step is converting to ENU — East, North, Up — a local Cartesian frame centred on the user:

$$E = (\mathrm{lon}_a - \mathrm{lon}_u)\cos(\mathrm{lat}_u)\,R_E, \qquad N = (\mathrm{lat}_a - \mathrm{lat}_u)\,R_E, \qquad U = \mathrm{alt}_a - \mathrm{alt}_u$$

where angles are in radians, subscripts $a$ and $u$ mean aircraft and user, and $R_E \approx 6{,}371$ km is the Earth's radius. This is a flat-earth approximation, accurate enough at visual ranges.
The East term has a cosine factor that the North term does not. Lines of longitude converge toward the poles — one degree of longitude is about 111 km at the equator but only about 38 km at 70°N. The cosine of the observer's latitude scales the East term to match.
Once you have ENU, bearing and elevation follow directly:

$$\text{bearing} = \operatorname{atan2}(E,\, N), \qquad \text{elevation} = \operatorname{atan2}\!\left(U,\, \sqrt{E^2 + N^2}\right)$$
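Here is a minimal Kotlin sketch of this step (the function and names are mine, not necessarily FlightEye's):

```kotlin
import kotlin.math.atan2
import kotlin.math.cos
import kotlin.math.hypot

const val EARTH_RADIUS_M = 6_371_000.0

data class Enu(val e: Double, val n: Double, val u: Double)

fun geodeticToEnu(
    latDeg: Double, lonDeg: Double, altM: Double,          // aircraft
    userLatDeg: Double, userLonDeg: Double, userAltM: Double
): Enu {
    val dLat = Math.toRadians(latDeg - userLatDeg)
    val dLon = Math.toRadians(lonDeg - userLonDeg)
    // Longitude degrees shrink with latitude: scale East by cos(user latitude).
    val e = dLon * cos(Math.toRadians(userLatDeg)) * EARTH_RADIUS_M
    val n = dLat * EARTH_RADIUS_M
    val u = altM - userAltM
    return Enu(e, n, u)
}

fun bearingRad(p: Enu): Double = atan2(p.e, p.n)             // 0 = north, clockwise
fun elevationRad(p: Enu): Double = atan2(p.u, hypot(p.e, p.n))
```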
Step 2 — ENU to device frame
The phone knows its own orientation via the rotation vector sensor — a fusion of accelerometer, gyroscope, and magnetometer — which outputs a quaternion. Android converts this to a 3×3 rotation matrix $R$ via SensorManager.getRotationMatrixFromVector.
The matrix maps device axes to world (ENU) axes. To go the other way you use the transpose, which for a rotation matrix is also the inverse:

$$\mathbf{v}_{\text{device}} = R^{\mathsf{T}}\,\mathbf{v}_{\text{ENU}}$$
In code this is dot products with the rows of $R^{\mathsf{T}}$, i.e. the columns of $R$:

```kotlin
val dX = R[0]*E + R[3]*N + R[6]*U
val dY = R[1]*E + R[4]*N + R[7]*U
val dZ = R[2]*E + R[5]*N + R[8]*U
```

Android stores the matrix row-major in a FloatArray(9), so column $k$ of $R$ sits at indices $k$, $k+3$, $k+6$.
The sensor to use is TYPE_ROTATION_VECTOR, not TYPE_GAME_ROTATION_VECTOR. The game variant fuses only accelerometer and gyroscope — it has no magnetic reference and drifts in azimuth. For an AR overlay that has to agree with GPS-derived bearings, you need absolute orientation.
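The sensor wiring itself is short. A sketch, assuming a plain SensorEventListener (class and names are illustrative):

```kotlin
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

class OrientationSource(private val sensorManager: SensorManager) : SensorEventListener {
    val rotationMatrix = FloatArray(9)   // row-major 3x3, device -> ENU

    fun start() {
        val sensor = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR) ?: return
        sensorManager.registerListener(this, sensor, SensorManager.SENSOR_DELAY_GAME)
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        if (event.sensor.type == Sensor.TYPE_ROTATION_VECTOR) {
            // Fills rotationMatrix from the sensor's quaternion.
            SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values)
        }
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
}
```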
Step 3 — Device frame to camera frame
Android’s device coordinate system has $+Z$ pointing out of the screen toward the user’s face. The rear camera points the other way. So camera forward is device $-Z$:

$$x_{\text{cam}} = d_X, \qquad y_{\text{cam}} = d_Y, \qquad z_{\text{cam}} = -d_Z$$
The negation on $Z$ is not a hack. It is the fundamental difference between the sensor’s convention and the camera’s convention. An aircraft is behind the camera — and should not be drawn — when $z_{\text{cam}} \le 0$.
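As code, the whole step is a sign flip plus a cull (a sketch; the data class and function names are mine):

```kotlin
data class CamPoint(val x: Float, val y: Float, val z: Float)

// Device frame -> camera frame: negate Z so the camera's forward axis is positive.
// Returns null for points behind the camera, which should not be drawn.
fun deviceToCamera(dX: Float, dY: Float, dZ: Float): CamPoint? {
    val zCam = -dZ
    return if (zCam > 0f) CamPoint(dX, dY, zCam) else null
}
```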
Step 4 — Perspective projection
With the aircraft in camera-frame coordinates $(x_c, y_c, z_c)$, projection uses the pinhole camera model. By similar triangles, the point projects to normalised device coordinates:

$$x_{\text{ndc}} = \frac{x_c/z_c}{\tan(\theta_h/2)}, \qquad y_{\text{ndc}} = \frac{y_c/z_c}{\tan(\theta_v/2)}$$
where $\theta_h$ and $\theta_v$ are the horizontal and vertical field of view. NDC lives in $[-1, 1]$ when the point is on screen. The division by $z_c$ is the perspective divide — objects twice as far appear half as large.
Converting NDC to pixels, given screen dimensions $W \times H$:

$$x_{\text{px}} = \frac{W}{2}\,(1 + x_{\text{ndc}}), \qquad y_{\text{px}} = \frac{H}{2}\,(1 - y_{\text{ndc}})$$
The minus sign in $(1 - y_{\text{ndc}})$ is the Y-axis flip. Screen coordinates have $y = 0$ at the top, increasing downward. Camera $+Y$ is up. They are opposite.
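A Kotlin sketch of the projection under the conventions above (names and signature are mine):

```kotlin
import kotlin.math.tan

data class ScreenPoint(val x: Float, val y: Float)

/**
 * Projects a camera-frame point to screen pixels.
 * Returns null when the point is behind the camera or outside the frustum.
 */
fun project(
    xCam: Float, yCam: Float, zCam: Float,   // camera frame, zCam = forward
    fovHRad: Float, fovVRad: Float,          // horizontal/vertical field of view
    widthPx: Float, heightPx: Float
): ScreenPoint? {
    if (zCam <= 0f) return null              // behind the camera

    // Perspective divide, scaled into NDC by the FOV half-angle tangents.
    val xNdc = (xCam / zCam) / tan(fovHRad / 2f)
    val yNdc = (yCam / zCam) / tan(fovVRad / 2f)
    if (xNdc < -1f || xNdc > 1f || yNdc < -1f || yNdc > 1f) return null

    // NDC -> pixels; the (1 - yNdc) flips Y because screen Y grows downward.
    return ScreenPoint(
        x = widthPx / 2f * (1f + xNdc),
        y = heightPx / 2f * (1f - yNdc)
    )
}
```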
The bugs this math predicts
The pipeline structure tells you what each bug will look like before you run the code.
Forget the $\cos(\mathrm{lat})$ factor in the East component: east-west positions are correct near the equator and increasingly wrong at higher latitudes.
Pass a remapped matrix (from SensorManager.remapCoordinateSystem) instead of the raw getRotationMatrixFromVector output: the overlay works at one compass heading and breaks when you rotate 180°. The remap is for extracting Euler angles for display only.
Forget the Y-axis flip: aircraft above the horizon appear below centre and vice versa.
Each failure mode is distinct. When something went wrong I could usually tell which transformation was broken just from the symptom.
On FOV accuracy
A 5° error in horizontal FOV shifts a distant aircraft’s projected position by up to roughly 10% of the half-screen width. Noticeable for an aircraft 10 km away.
Android’s CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS gives the focal length $f$. With the physical sensor dimensions $w \times h$ (from SENSOR_INFO_PHYSICAL_SIZE):

$$\theta_h = 2\arctan\!\left(\frac{w}{2f}\right), \qquad \theta_v = 2\arctan\!\left(\frac{h}{2f}\right)$$
If you cannot access that, the practical default for a rear phone camera in portrait is around $\theta_h \approx 50°$, $\theta_v \approx 65°$.
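Querying that from Camera2 looks roughly like this (a sketch assuming one rear camera in portrait; error handling omitted):

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager
import kotlin.math.atan

// Returns (horizontal, vertical) FOV in radians for the given camera,
// assuming portrait orientation (the sensor's short side spans the screen width).
fun fieldOfView(context: Context, cameraId: String): Pair<Float, Float> {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    val chars = manager.getCameraCharacteristics(cameraId)
    val focalMm = chars.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS)!![0]
    val sensor = chars.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE)!!   // in mm
    // Sensor dimensions are reported in landscape; swap for portrait.
    val fovH = 2f * atan(sensor.height / (2f * focalMm))
    val fovV = 2f * atan(sensor.width / (2f * focalMm))
    return fovH to fovV
}
```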
The implementation is in Kotlin with Jetpack Compose. The sensor callbacks, projection, and Canvas overlay all run on the same data — a StateFlow of aircraft positions that the ViewModel exposes and the composable observes.
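In outline, the wiring looks something like this (a sketch; none of these are FlightEye's actual class names):

```kotlin
import androidx.compose.foundation.Canvas
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.runtime.Composable
import androidx.compose.runtime.collectAsState
import androidx.compose.runtime.getValue
import androidx.compose.ui.Modifier
import androidx.lifecycle.ViewModel
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow

data class Aircraft(val callsign: String, val lat: Double, val lon: Double, val altM: Double)

class AircraftViewModel : ViewModel() {
    private val _aircraft = MutableStateFlow<List<Aircraft>>(emptyList())
    val aircraft: StateFlow<List<Aircraft>> = _aircraft   // updated by the ADS-B poller
}

@Composable
fun OverlayScreen(viewModel: AircraftViewModel) {
    val aircraft by viewModel.aircraft.collectAsState()
    Canvas(modifier = Modifier.fillMaxSize()) {
        aircraft.forEach { plane ->
            // Run the four-step pipeline here, then drawCircle/drawText at the result.
        }
    }
}
```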