Hangar space at busier airports in the U.S. is scarce. Hiring people who are competent to move aircraft without smacking them into each other (creating “hangar rash,” potentially a multi-million-dollar issue on a jet) gets tougher every year; the American workforce skews toward either white-collar desk jobs or SSDI/Xbox, leaving a big shortage of competent blue-collar workers. There are some innovative tug technologies that can substantially increase the density of packing (see mototok.com, for example).
How about this for an AI master’s thesis: put a few cameras in the ceiling of a big hangar and then provide assistance to humans driving the Mototok? Back in 2011 there was a company with a laser-based “warning system” for wall contact (article). But airplanes should not be hard to see in a well-lit hangar, especially from a ceiling-mounted vantage point. Why not a system that can run on a mobile phone and be suitable for use in a small hangar with, e.g., just two or three aircraft?
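A minimal sketch of what the phone-and-ceiling-camera version might look like, assuming OpenCV, a fixed overhead camera, and a reference image of the empty hangar floor. Everything here is my invention for illustration: the background-subtraction approach, the function names, and the thresholds (which would need calibration to real pixels-per-meter at the camera's height).

```python
# Hypothetical sketch of the ceiling-camera idea: segment aircraft against an
# empty-floor reference image and warn when two outlines get too close.
# All thresholds are placeholders, not calibrated values.
import cv2
import numpy as np

WARN_DISTANCE_PX = 60  # placeholder: pixels corresponding to roughly 1 m at this camera height

def aircraft_outlines(frame_bgr, floor_bgr):
    """Return contours of objects that differ from the empty-floor reference image."""
    diff = cv2.absdiff(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(floor_bgr, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > 5000]  # ignore small debris

def min_gap_px(contour_a, contour_b):
    """Smallest pixel distance between two contours (brute force; fine for a few aircraft)."""
    pts_a = contour_a.reshape(-1, 2).astype(np.float32)
    pts_b = contour_b.reshape(-1, 2).astype(np.float32)
    dists = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    return float(dists.min())

def proximity_warnings(frame_bgr, floor_bgr):
    """Yield (i, j, gap) for every pair of outlines closer than the warning threshold."""
    outlines = aircraft_outlines(frame_bgr, floor_bgr)
    for i in range(len(outlines)):
        for j in range(i + 1, len(outlines)):
            gap = min_gap_px(outlines[i], outlines[j])
            if gap < WARN_DISTANCE_PX:
                yield i, j, gap
```

This plan-view check ignores height, which is exactly the limitation raised in the comments below.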
[Why is hangar space scarce? There are the usual permitting costs plus high construction costs that make it tough for the U.S. to build infrastructure quickly or cheaply. But the biggest cause is unique to airports. The public pays aviation fuel taxes that the FAA uses to build runways and taxiways, but the land around those federally funded resources is usually controlled by a municipality. The municipality will restrict construction of FBOs and hangars to one or two chosen cronies, thus creating artificial scarcity for fuel and hangar space. Fuel prices may be boosted by 1.5-2X and hangar prices by 3X compared to nearby, less busy airports. If a new FBO tries to get in, the incumbent(s) will lobby politicians (usually successfully) to prevent vacant land at the airport from being used for an aviation purpose.]
Packing aircraft efficiently into a hangar often requires components of one aircraft to pass over/under components of another: wings over low tails, or under T-tails, high wings over low wings, etc. So a ceiling-mounted vantage point alone wouldn’t be enough to do this without some knowledge of the height dimension of the objects. Are conflict algorithms good enough to understand stereoscopic views like this? Or a ceiling camera combined with a complete database of aircraft shapes?
39alpha: Great point, but there aren’t a lot of aircraft types, and every example of a given type should be dimensionally identical. So the over/under problem could be solved by giving the software a 3D model of every popular aircraft type.
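A rough sketch of how that could work, assuming each type is reduced to a handful of labeled boxes with known height ranges. The two “types” and all dimensions below are invented placeholders, not real aircraft data, and rotation is left out to keep the example short.

```python
# Component-box collision check: footprints may overlap in plan view as long as
# the parts involved occupy different height bands (wing under a tail, etc.).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Box:
    name: str
    x: Tuple[float, float]  # along the fuselage, meters, nose at 0
    y: Tuple[float, float]  # across the span, meters, centerline at 0
    z: Tuple[float, float]  # height above the hangar floor, meters

# Rough component boxes for two made-up types (all dimensions are placeholders).
HIGH_WING = [
    Box("fuselage", (0.0, 8.5), (-0.6, 0.6), (0.6, 2.8)),
    Box("wing",     (2.0, 3.6), (-5.5, 5.5), (2.1, 2.4)),
    Box("tail",     (7.5, 8.5), (-1.7, 1.7), (1.3, 2.9)),
]
LOW_WING = [
    Box("fuselage", (0.0, 8.3), (-0.6, 0.6), (0.3, 2.3)),
    Box("wing",     (2.5, 4.2), (-5.1, 5.1), (0.5, 1.0)),
    Box("tail",     (7.3, 8.3), (-1.8, 1.8), (1.4, 2.5)),
]

def place(boxes: List[Box], dx: float, dy: float) -> List[Box]:
    """Shift an aircraft's boxes to a parking spot (rotation omitted for brevity)."""
    return [Box(b.name,
                (b.x[0] + dx, b.x[1] + dx),
                (b.y[0] + dy, b.y[1] + dy),
                b.z)
            for b in boxes]

def overlaps(a: Box, b: Box) -> bool:
    """Axis-aligned boxes collide only if their extents overlap on all three axes."""
    return all(a_lo < b_hi and b_lo < a_hi
               for (a_lo, a_hi), (b_lo, b_hi) in zip((a.x, a.y, a.z), (b.x, b.y, b.z)))

def conflicts(plane_a: List[Box], plane_b: List[Box]) -> List[Tuple[str, str]]:
    return [(a.name, b.name) for a in plane_a for b in plane_b if overlaps(a, b)]

# Low wing tucked under the high wing: the footprints overlap in plan view but not
# in height, so nothing is flagged. Pull the aircraft 4 m closer and the high wing
# meets the other fuselage, which the height check correctly reports.
print(conflicts(place(HIGH_WING, 0, 0), place(LOW_WING, 0, 9.0)))  # []
print(conflicts(place(HIGH_WING, 0, 0), place(LOW_WING, 0, 5.0)))  # wing/fuselage conflicts
```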
Speaking of the Xbox, the Xbox’s Kinect motion sensor does the type of machine vision that would apply here. There is a hacking community, sometimes encouraged by Microsoft, that tries to repurpose the hardware for non-gaming applications.
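For example, an overhead depth frame (which is what a Kinect-style sensor produces) could feed a height map directly. A rough sketch, assuming the frame is already available as a NumPy array of distances in meters; actually pulling and calibrating that frame from the Kinect hardware (libfreenect, etc.) is the hard part and is omitted here.

```python
# Turn an overhead depth frame into height-above-floor and check clearance under a
# planned tug path. Sensor height and margins are placeholder values.
import numpy as np

CEILING_HEIGHT_M = 7.0  # placeholder: depth sensor height above the hangar floor

def height_map(depth_frame_m: np.ndarray) -> np.ndarray:
    """Per-pixel height above the floor, for a sensor looking straight down."""
    heights = CEILING_HEIGHT_M - depth_frame_m
    return np.clip(heights, 0.0, None)  # invalid or far returns become floor level

def path_is_clear(heights: np.ndarray, path_mask: np.ndarray,
                  towed_part_height_m: float, margin_m: float = 0.3) -> bool:
    """True if nothing under the planned path rises high enough to hit the part
    being towed over it (e.g., a wing passing above a low obstacle)."""
    return bool(np.all(heights[path_mask] + margin_m < towed_part_height_m))
```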
You would not need a university. Just hangar space, cheap hardware from GameStop, and a willing teacher.
The Xbox people are a better resource: gamify the parking and people will pay you to solve the problem.
https://www.theregister.co.uk/2000/04/17/playstation_2_exports/
It is telling that people who write “you would not need a university” think that a LIDAR system designed to work in a living room would scale up to hangar size.
Now that I think about it, why would you want the cameras to direct the “humans driving the tugs”? Why not direct the tugs themselves?
You are underestimating the ability of the available labor and overestimating AI. These AI solutions cannot be amortized over just a few use cases.
On the issue of FBO leases, any airport accepting FAA grants agrees to covenants against restrictive leases. In practice, at airports with profitable FBO markets, the big FBO companies decide among themselves who will lease where. They are much more effective at signaling than the individual airports are at limiting access. The practices you describe are more likely at rural airports that don’t attract the big FBOs or require big capital investments.
@tiago
We just need kibitzing from smart alecks like you to keep us straight.
: )
https://www.google.com/search?q=scaling+kinect+to+large+space&oq=scaling+kinect+to+large+space
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3304120/
I still think the Kinect would be useful to demonstrate a proof of concept. Small model planes, instead of real ones, are likely superior for developing this system: collisions are less costly.
You are probably right that the final application requires better hardware.