Disney World shows that VR is pointless?

If you’re trying to save a few dollars, maybe a head-mounted display is a good idea. What if you don’t care about capital cost? Disney World has a lot of immersive simulators that don’t require any headgear for the park guests. They just project a virtual world on big curved screens.

What about for home use? Why not build a small room in a house with a curved screen that completely surrounds the player? Use whatever tricks they’re using at Disney to make the projection work, but with $100 LCD projectors instead of the super bright ones needed for the monster domes that hold hundreds of people simultaneously.

If you’ve got your head-mounted VR system on, you’re not going to be a great asset to the rest of the folks in an apartment or house. Why not declare that immersive gaming is an activity that happens in its own room? Maybe it costs $5,000 instead of $500 for the hardware, but people used to pay $5,000 for the then-new plasma TVs.

Readers: Would this be better or worse than the VR headsets?

Full post, including comments

What laptop for Senior Management?

Stop the Presses! My opinion has been asked for by another household member!

It is time for a new laptop for Senior Management. She is accustomed to Microsoft Windows and a 15″ screen. She does not like or want a touchscreen. She’ll be using it at home and in conference rooms at various pharma companies.

Surveying the laptop market, I’m surprised at how little improvement there has been in price or specs in the past few years. This truly seems like a stalled industry. You have to pay about $500 minimum. You get a mechanical hard drive just like the 1957 IBM RAMAC. You get 8 GB of RAM, barely enough to run a cleanly booted Windows 10 (welcome to Swap City!). How is this different from three years ago, for example?

My last laptop (Dell XPS 13) could never be made to sleep properly, despite a few trips back to Dell for hardware service and software reinstallation, so I’m thinking that Dell shouldn’t be on the list of contenders.

The LG gram series seems interesting. Costco is selling one with 16 GB of RAM and a 512 GB SSD for $1300. They promise to support it for two years and take it back within 90 days if it fails the way that the Dell did. It weighs a minimal 2.4 lbs. and reviews say that the promised battery life is real (16+ hours).

Unlike Dell (and Apple?), LG does not plunge the unlucky buyer into a world of dongles. The specs include three legacy USB ports, one hip new USB-C port, and an HDMI output (perfect for the executive who needs to plug into a projector and doesn’t want to have to remember a dongle). Photographers will be stuck in dongle hell, however, because there is no SD card reader (only “Micro-SD”).

The LG site claims that the device has been tested for ruggedness and is stronger than its minimal weight would suggest. The only way in which this LG differs from Senior Management’s spec is in the provision of a touch screen (but she doesn’t have to use it!). And perhaps the screen resolution could be higher? But then we would say goodbye to the long battery life?

Readers: What do you think? Is there a better idea than this LG?

Related:

Full post, including comments

Virtual reality and augmented reality: the technologies of the future

Part of our Austin experience was visiting the virtual/augmented reality lab at Capital Factory. Folks there have decided that the best current VR hardware is the HTC Vive. They aren’t in love with the much-hyped Oculus, but have it available to demo.

We did a 3D drawing game, browsed around in Google Earth, and played a first-person space-themed shooting game with the Vive. With Oculus, I played Angry Birds.

The good news is that we didn’t get sick, even flying around in Google Earth. On the other hand, I would rather have just covered the walls with more TVs for a more immersive experience.

I asked the folks running the lab for their theory on why VR hasn’t caught on. They cited the cost, noting that a complete HTC Vive rig is about $600. Yet that’s nothing compared to what hardcore gamers spend.

Readers: What do you think? Is it fair to say that “VR/AR is the technology of the future, and always will be”?

Full post, including comments

Sizing a UPS for cable modem and router; market opportunity for a long duration low power UPS?

Things that our neighbors hate more than Donald Trump:

  • cell towers
  • underground power lines

Power failures are routine and, when they happen, we lose all communications capability (since a mobile phone won’t work inside the house and only barely works out in the yard).

I’m thinking it might be nice to back up our Verizon FiOS service, including the Internet. Then, in theory, we can at least use our landline and our smartphones or laptops that are charged.

A friend in town says that this is a fool’s errand: “when we had power failures, it turned out that the fiber switch on the street would go down.” On the other hand, this FiOS customer reports that the Internet kept working through 72 power outages in a 6-year period (great advertisement for U.S. infrastructure!).

I’m wondering how to size the UPS to run the latest ONT (corresponding to a cable modem) and VZ’s WiFi router. Verizon sells a ghetto backup battery system, just for the ONT (to run the landline for 24 hours), based on 12 D cell disposable batteries. Wikipedia says a D battery has 18 amp-hours of capacity at 1.5V, so the total of 12 would have 324 watt-hours?

If we assume that the WiFi router draws a similar amount and that both boxes will be plugged into the UPS, we therefore need a UPS with 650 watt-hours of battery? Add another 20 percent for the efficiency losses in converting from battery DC up to 120V AC and back down to DC, so now we need roughly 800 watt-hours of battery inside the UPS to run for 24 hours?
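Here is that arithmetic as a few lines of Python (a sketch; the roughly 13.5-watt per-box draw is an inference from Verizon’s 24-hour spec, not a measurement):

```python
# Back-of-the-envelope UPS sizing, using the numbers from this post.
# The per-box draw is inferred from Verizon's spec (12 D cells run the
# ONT for 24 hours) -- treat it as an assumption, not a measurement.

D_CELL_WH = 18 * 1.5           # one alkaline D cell: 18 Ah at 1.5 V = 27 Wh
ont_pack_wh = 12 * D_CELL_WH   # Verizon's 12-cell pack: 324 Wh
ont_watts = ont_pack_wh / 24   # ~13.5 W average draw for the ONT

router_watts = ont_watts       # assume the WiFi router draws about the same
combined_watts = ont_watts + router_watts   # ~27 W for both boxes

hours_wanted = 24
ideal_wh = combined_watts * hours_wanted    # ~650 Wh, ignoring losses
conversion_loss = 0.20                      # DC -> 120 V AC -> DC round trip
required_wh = ideal_wh * (1 + conversion_loss)

print(f"Combined draw: {combined_watts:.1f} W")
print(f"Battery needed for {hours_wanted} h: about {required_wh:.0f} Wh")
# => roughly 780 Wh, i.e., the "800 watt-hours" figure above
```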

It seems to be tough to find this information. UPS vendors spec them in volt-amps or watts and then bury the battery details. Also, maybe Verizon is selling its own thing because the appropriate product does not exist in the market? To get a beefy battery one needs to invest in crazy high max VA, which is irrelevant in this application. A $200 UPS rated at 1500 VA is backed by only two feeble $20 8.5 Ah 12V batteries (204 watt-hours; less than Verizon’s 12 D cells). We bought one to try out and it supplies the ONT and router for 2.5 hours, less than half as long as expected. The higher-capacity machines seem to be marketed as “generators” (without the generator!), e.g., this 412 Wh 11 lb. box for $550.
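One way to read “less than half as long as expected”: with the same assumptions as above (~27 W combined draw, 20 percent conversion loss), 204 Wh should have run both boxes for about six hours, so 2.5 hours implies either a heavier real-world load or worse losses. A quick check:

```python
# Sanity check on the $200 / 1500 VA unit, same assumptions as above.

battery_wh = 2 * 8.5 * 12      # two 8.5 Ah, 12 V batteries = 204 Wh
usable_wh = battery_wh * 0.8   # assume 20 percent conversion losses
assumed_load_w = 27            # ONT + router estimate from above

print(f"Expected runtime: {usable_wh / assumed_load_w:.1f} h")   # ~6 h

measured_hours = 2.5
print(f"Implied load: {usable_wh / measured_hours:.0f} W")       # ~65 W
```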

APC makes a box with a replaceable lithium ion battery for only about $71, which they say is intended to power routers, but it stores a pathetic 41 Wh. Lithium-ion is just not a sensible way to buy watt-hours, apparently.

Readers: Is there a market opportunity here? Apparently providing even the power of 12 D cells on a trickle-out basis is crazy expensive right now. How about a device that holds 24(!) D cell batteries and, in the event of a power failure, will supply power from those batteries to a router and ONT or cable modem? A brief interruption in the power supply is acceptable. Amazon sells D cell Energizer alkaline batteries for about $1 each, delivered. Instead of buying a $500 lithium-ion battery that will be garbage after 3 years, just buy $24 of D cells every year or two.

Full post, including comments

New York Times tried to teach Americans how digital computers worked…

… back in 1967: “The Electronic Digital Computer: How It Started, How It Works and What It Does”:

Many men have played pivotal roles in the evolution of technologies important to computer science. Among them are the following:

George Boole (1815-1864), a British mathematician who developed the concepts of symbolic logic.

Norbert Wiener (1894-1964) of the Massachusetts Institute of Technology, whose Cybernetics systematized concepts of control and communications.

Alan M. Turing (1912-54), a British mathematician who formulated a definition of automatic machines, worked out a mathematical model of an elementary “universal computer” and proved theoretically that it could be programmed to do any calculation that could be done by any automatic machine.

John von Neumann (1903-57), a Hungarian mathematician who came to the United States where he developed the concept of the stored program for digital computers.

Claude E. Shannon, now aged 50, of M.I.T., who defined the application of symbolic logic to electrical switching and is even better known for his basic work in information theory.

The journalists missed the Atanasoff–Berry computer (1937-1942), but it apparently didn’t become widely known until a 1967 patent lawsuit.

There is some real technical info:

The stored-program concept involves the storage of both commands and data in a dynamic memory system in which the commands as well as the data can be processed arithmetically. This gives the digital computer a high degree of flexibility that makes it distinct from Babbage’s image of the Analytical Engine.

Despite its size and complexity, a computer achieves its results by doing a relatively few basic things. It can add two numbers, multiply them, subtract one from the other or divide one by the other. It also can move or rearrange numbers and, among other things, compare two values and then take some pre-determined action in accordance with what it finds.

For all its transistor chips, magnetic cores, printed circuits, wires, lights and buttons, the computer must be told what to do and how. Once a properly functioning modern computer gets its instructions in the form of a properly detailed “program,” it controls itself automatically so that it responds accurately and at the right time in the step-by-step sequence required to solve a given problem.

Progress has enabled us to use more lines of JavaScript to format a web page than the pioneers needed in assembly code to run a company:

Developing the software is a very expensive enterprise and frequently more troublesome than designing the actual “hardware”—the computer itself. As an illustration of what the software may involve, it is often necessary to specify 60,000 instructions or more for a large centralized inventory-control system.

Hardware:

A flip-flop is an electronic switch. It usually consists of two transistors arranged so that incoming pulses cause them to switch states alternately. One flips on when the other flops off, with the latter releasing a pulse in the process. Thus, multiple flip-flops can be connected to form a register in which binary counting is accomplished by means of pulse triggers.

Stable two-state electronic devices like the flip-flop are admirably suited for processing the 0 and 1 elements of the binary-number system. This, in fact, helps to explain why the binary-number system is commonly used in computers.
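The flip-flops-into-a-register idea is easy to mimic in a few lines of Python (my toy sketch, not something from the article): each stage toggles on an incoming pulse and passes a pulse onward only when it flips from 1 back to 0, which is exactly a ripple counter.

```python
# Toy model of a register built from toggle flip-flops: each stage flips on
# an incoming pulse and releases a carry pulse to the next stage only when
# it flips from 1 back to 0 (a ripple counter).

class FlipFlop:
    def __init__(self):
        self.state = 0

    def pulse(self):
        """Toggle; return True if a carry pulse should ripple onward."""
        self.state ^= 1
        return self.state == 0   # flipped 1 -> 0: release a pulse

def count_pulses(register, n_pulses):
    for _ in range(n_pulses):
        carry = True
        for ff in register:      # least-significant stage first
            if not carry:
                break
            carry = ff.pulse()
    # read the register out, most-significant bit first
    return "".join(str(ff.state) for ff in reversed(register))

register = [FlipFlop() for _ in range(4)]   # a 4-bit register
print(count_pulses(register, 11))           # -> '1011', i.e., 11 in binary
```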

Math:

In Boolean representation, the multiplication sign (x) means AND while the plus sign (+) means OR. A bar over any symbol means NOT. An affirmative statement like A can therefore be expressed negatively as Ā (NOT A).
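Over the 0 and 1 values the machine actually stores, that notation maps directly onto ordinary arithmetic. A quick check in Python (my illustration, not the Times’):

```python
# The article's Boolean notation, checked over the values 0 and 1:
#   A x B means AND, A + B means OR, a bar over A means NOT.

def AND(a, b): return a * b          # multiplication really is AND on {0, 1}
def OR(a, b):  return min(a + b, 1)  # addition is OR, capped at 1
def NOT(a):    return 1 - a          # the "bar"

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOT A:", NOT(a))

# De Morgan's law, NOT(A AND B) == (NOT A) OR (NOT B), holds on all four rows:
assert all(NOT(AND(a, b)) == OR(NOT(a), NOT(b)) for a in (0, 1) for b in (0, 1))
```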

Hardware again:

There are three basic types of gates: the OR, which passes data when the appropriate signal is present at any of its inputs; the AND, which passes data only when the same appropriate signals are present at all inputs; and the NOT, which turns a 1-signal into a 0-signal, and vice versa.

All operations in the computer take place in fixed time intervals measured by sections of a continuous train of pulses. These basic pulses are sometimes provided by timing marks on a rotating drum, but more frequently they are generated by a free-running electronic oscillator called the “clock.”

The clock-beat sets the fundamental machine rhythm and synchronizes the auxiliary generators inside the computer. In a way, the clock is like a spinning wheel of fortune to which has been affixed a tab that touches an outer ring of tabs in sequence as the wheel revolves. If a signal is present at an outer tab when the wheel tab gets there, the appropriate gate is opened.

Each time-interval represents a cycle during which the computer carries out part of its duties. One machine operation can be set up for the computer during an instruction cycle (I-time), for example, and then processed during an execution cycle (E-time).
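That I-time/E-time description is the fetch-execute loop that machines still use. Here is a toy stored-program machine in Python (my sketch; the four-instruction set and the program are invented, and vastly simpler than anything from 1967):

```python
# A toy stored-program machine: on each tick of the "clock" it alternates
# between an instruction cycle (fetch and decode) and an execution cycle.
# Program and data live in the same memory, per the stored-program idea.

memory = {0: ("LOAD", 7), 1: ("ADD", 8), 2: ("STORE", 9), 3: ("HALT", None),
          7: 20, 8: 22, 9: 0}   # addresses 0-3 hold the program, 7-9 the data

accumulator = 0
program_counter = 0
instruction = None
phase = "I"                      # start in I-time

for tick in range(100):          # the clock
    if phase == "I":             # instruction cycle: fetch and decode
        instruction = memory[program_counter]
        program_counter += 1
        phase = "E"
    else:                        # execution cycle: carry the operation out
        op, addr = instruction
        if op == "LOAD":
            accumulator = memory[addr]
        elif op == "ADD":
            accumulator += memory[addr]
        elif op == "STORE":
            memory[addr] = accumulator
        elif op == "HALT":
            break
        phase = "I"

print(memory[9])                 # -> 42
```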

Click through at the bottom to the scanned page and you will see flowcharts, circuit diagrams, and an instruction set.

The public used to be interested in this stuff, apparently!

Full post, including comments

Google shows that James Damore and Econ 101 were right?

James Damore, the Google Heretic, was cast out for saying that intelligent people who identify as “women” did not enjoy staring at a screen and typing out pages of boring C and Java code (while simultaneously wearing headphones and rubbing elbows with other nerds?).

Damore suggested that the programming job be reconfigured so that it would be more appealing to people identifying as women. Instead of doing that, Google fired him for his thoughtcrime.

If Damore were correct, Econ 101 would predict that women at Google would be getting paid more than men for doing the same job. Otherwise, why would they be there doing something that was distasteful to them?

“Google Finds It’s Underpaying Many Men as It Addresses Wage Equity” (nytimes):

When Google conducted a study recently to determine whether the company was underpaying women and members of minority groups, it found, to the surprise of just about everyone, that men were paid less money than women for doing similar work.

Doesn’t this tend to show that both Damore and Econ 101 are correct?

Related:

Full post, including comments

Less than a month to go before Google breaks hundreds of thousands of links all over the Internet

Google purchased Picasa, a super efficient photo editor that offered seamless integration with online publishing (e.g., you add a photo to an album on your desktop computer and it automatically gets pushed to the online version of the album). When they were pushing their Facebook competitor, Google+, they set it up so that Picasa created Google+ albums.

They wasted a huge amount of humanity’s time and effort by shutting down Picasa (previous post on the subject).

Now they’re going to waste millions of additional hours worldwide by breaking links to all of the Google+ albums that they had Picasa create. People will either have to edit a ton of links or, having arrived at a broken link, start searching to see if they can find the content elsewhere.

Example: my review of an Antarctica cruise on the Ocean Diamond. It was so easy to publish the photos via Picasa that I just linked to the photo album from the HTML page. Now I will have to move the photos somewhere else, edit the HTML file, git push, git pull, etc. Then repeat for every other blog posting and web page that links to a Picasa-created album.

Maybe this is why Google has a corporate mission of making the world’s information accessible? They’re the primary force now in making information inaccessible?

Related:

Full post, including comments

NYT: Greedy employers love women; woke university professors hate them

“The Secret History of Women in Coding: Computer programming once had much better gender balance than it does today. What went wrong?” (NYT, February 13, 2019) describes a golden age of female nerddom from the 1950s through the mid-80s. Employers would recruit, train, and pay people who identified as women to write software in IBM 704 assembly language. They would even do this for applicants who identified as part of two victim groups (a “young black woman” is cited).

According to the newspaper, once it became conventional for programmers to get Computer Science degrees, the percentage of women choosing coding dropped:

If we want to pinpoint a moment when women began to be forced out of programming, we can look at one year: 1984.

Who was forcing them out, then? University professors and the environments that they set up! Women-hating CS faculty were apparently eager to send approximately half of the potential students, and the funding that would accompany them, into the arms of less sexist departments.

The women described in the article don’t support the narrative of the NYT. One left coding because she wanted to be a lawyer and had a successful four-decade career in the law (see Atlantic article below on how the availability of higher-paid and more prestigious work in, e.g., finance, medicine, law, and politics can draw women away from the last-resort jobs of engineer and computer programmer).

The comments are interesting. A young woman reading these would certainly choose some career other than programming. Women describe decades of misery and sexism in the cubicle plantations where they’ve been stuck (and now they don’t even get cubicles!). So we have the odd phenomenon of the NYT saying that they are passionate about pushing up the number of women in STEM while simultaneously writing articles that any rational young woman would interpret as a huge warning flag regarding a STEM career.

Alternative explanations are not considered by the journalist and editors. For example, the purported golden age of female coding ended just as programming changed character. The job of data scientist today is a lot more like what a “programmer” was doing in the 1970s.

Another alternative is that the golden age coincided with a time of maximum female economic insecurity. No-fault divorce was being rolled out, in which the husband could unilaterally shed the wife in favor of a younger sex partner. But post-divorce financial arrangements were subject to the whims of individual judges due to a lack of guidelines and precedent. Once known-in-advance rules were set up, a lot of married women concluded that they didn’t need to work (see the economic study by Voena cited in “Litigation, Alimony, and Child Support in the U.S. Economy”). Child support guidelines introduced in the 1980s made it more lucrative for a woman to have sex with an already-married dentist or doctor than to go to work as a software engineer (see Massachusetts family law, for example).

Nor does the Times consider why female-run profit-hungry employers don’t seek out women to hire, train, and exploit. Sheryl Sandberg runs Facebook and advertises her passion for the advancement of women. If there is a huge reservoir of female coding talent out there, why wouldn’t Facebook tap into it with an aptitude test and an in-house training program? The cost of training women to the standard of a BSCS is less than what Facebook is currently spending to recruit men. (Remember that most of the four years of a BSCS is spent doing stuff that doesn’t relate to being a software engineer. For one thing, a full two years is spent not being in school at all.) How about Epic Systems and its multi-billionaire founder who identifies as a woman? Why wouldn’t they save a ton of money by recruiting and training an all-female staff to relive the glorious days of the 1960s with their 1960s database technology? (Epic rejects the RDBMS!)

Finally, the Times doesn’t consider the apparent inconsistency between this article and the rest of their journalism. Capitalism is responsible for the evils of racism and sexism. Universities are where enlightenment prevails. Is it that CS professors are the exception to the general rule? And what’s their motivation? Why do they want to see the biology department get the fancy new building to accommodate all of the female students (now a majority on campus)?

Lastly, the article leans heavily on a mischaracterization of what James Damore, the cast-out Google heretic, wrote. (Has anyone at any American newspaper actually read his infamous memo?)

Readers: What was your favorite comment on this piece? I like the ones that say that the waning of female nerddom was due to the high salaries that purportedly began to be paid to programmers (the BLS can’t find this! Programmers today get paid less, on average, than the women described in the article were getting way back when). None of these coastal elites ask why, if it is all about the Benjamins, the percentage of women is growing in the highest-paid fields, such as medicine and finance.

Related:

Full post, including comments

Best programming language for a tweenager to learn (Java vs. JavaScript vs. Python)

A Facebook Messenger exchange that might be useful to others … (edited out some of the shocking language!)

Parent 1: D [14 years old] is in Robotics and has to learn Java. Did they mean Java or Javascript? S [11 years old] wants to learn a language. Should it be Java or Python or Swift or something else?

Me: JavaScript and Node.js. Then he can do front end and server side.

Parent 1: [Professional programmer friend] said robots are not programmed in JavaScript.

Me: That will change.

Parent 2: Ok so no Java please. It is a formally much better language. But the simplest programs take pages and pages of code before one can do anything. The APIs are verbose beyond belief. I hate it. It is a language of an Enterprise software architect who doesn’t really code but costs $250,000 a year. Javascript is truly a piece of s***. Inconsistent and dirty but the kids do not care – they can learn that quickly. There are a lot of good libraries for JS now. So I would agree with Philip that JS is a better choice. Python is probably a close second, but no frontend then.

Parent 1: You said JS was s*** but then you said learn it?

Parent 2: I think Java will put him to sleep. And he needs to be able to tinker and experiment fast. You have to consider his age. My kids completed three programming courses in JS on Khan Academy this summer. He should be spending as little time as possible on learning syntax. And as much time as possible on f-ing around with his code to learn design patterns so to speak. Like loops, how to find the largest/smallest number in an array using a loop. Without digging in documentation or using Max/min functions to set bounds on a variable that he is changing. JavaScript has an advantage that it has C-like syntax which is similar to that of Java. If he God forbid wants to learn it later.

Parent 1: So JS is s*** and Java is worse?

Parent 2: Our general advice here is to learn one high-level scripting language (Python or Perl, but everyone hates Perl now) and one low-level language like C++ or Java. But I just don’t think a teenager has the patience to learn Java. I don’t think anyone who respects himself or herself as a programmer should build a career around JavaScript. But everyone has to know it.

Parent 1: Ok. Sounds like S should do python and D should do whatever his teacher says.

Parent 2: Consider courses available. The content and engagement in the course trumps language.

Parent 1: D has already started Java.

Parent 2: Ok, then Godspeed. Look up a hello world application in Java. So, teacher, what is a class. What does public mean. What is static void. This is seven chapters of a textbook just to say hello world. Including a f-ing array of strings as an argument. And a dot notation. What is System. What is out?

Parent 2: Teachers who start teaching anyone under 18 in Java are either idiots or are teaching a group of ultra-motivated MIT students. Also try setting up an IDE and compiling this baby of a program. You will pull your hair out. Once it outputs hello world to console, your normal child will rightfully look at you in disbelief and think “who the f*** wants to do this every day”? Don’t get me wrong. My crawlers are written in Java. But it would be like watching a pornstar do an hour-long **** video, then trying it with your college girlfriend for the first time and wondering why it didn’t go the same way.

Parent 2: (Actually our crawlers are written in Kotlin, which is a script-like language built on top of Java (compatible in both directions). The Russians developed it to make Java more bearable and increase the speed of development.)

Me: Haskell if he wants to learn about computation, but JavaScript is the real-world power due to libraries and APIs.

Parent 2: Perl IS still #1 in terms of libraries. Python and JS are catching up. Kotlin is like Python with Java power (which also has libraries for almost everything). Plus everyone has to know JavaScript. Python is cleaner and more logical, for sure. Their philosophy is antithetical to Perl’s: there should be only one way to do it. So they spend time fighting over which function to keep. This is good for large socialist enterprises where everyone is a cog in the machine. So that mediocre programmers don’t get confused. I started teaching my kids Python and quickly ran out of energy. I then moved to Khan Academy and their JS-based courses, which are about programming, and not JS per se. That was quite successful, but the difficulty accelerated very quickly and I needed to be behind them to give hints and challenge them at key junctures.

Parent 1: [another programmer friend] says Java is the new COBOL.

Parent 2: Most computer nerds are wrong when it comes to how to teach programming. It has to be now taught just like mathematics: slowly, painfully, step by step to build foundations. Can’t get to cool or useful s*** quickly without several years of work.

Parent 2: In order to make a clone of Tinder you’d need to know: 1. app development for iOS, 2. HTTP server programming, 3. databases, 4. image storage and processing, 5. file I/O, 6. APIs. That’s at least two programming languages, plus SQL and a bunch of other s***.

Parent 2: One has to keep doing it. My buddy put his 2nd grader in front of Khan Academy and she went all the way to the end of the Javascript track. I asked him to test her after 6 months – she forgot nearly everything. She obviously retained concepts, but that was about it. That’s not surprising because adults are exactly the same way.

Parent 1: The thing is – they remember that they could do it. So it helps them later. I haven’t forgotten C programming because I did it for so many years. But I have forgotten iOS programming.
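For reference, the “find the largest number in an array using a loop” exercise that Parent 2 mentions is about this much code in Python (my sketch; the kids in the conversation were doing the JavaScript equivalent on Khan Academy):

```python
# The beginner exercise from the conversation: find the largest number in an
# array with a plain loop, no max() built-in and no documentation-diving.

numbers = [3, 41, 7, -2, 19]   # made-up sample data

largest = numbers[0]           # start by assuming the first element wins
for n in numbers[1:]:          # walk the rest of the array
    if n > largest:            # found something bigger? remember it
        largest = n

print(largest)                 # -> 41
```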

Full post, including comments