Archive for category Geekiness

Coming back online

It’s been a while. I kind of dislike the idea of posting my thoughts and musings on social media as I find it increasingly frustrating to even view, let alone contribute to. So instead I’m thinking I will start posting here again.

Random topics I’m thinking of: family stuff, politics, technology, projects I’m interested in, and maybe, just maybe, some writing. I’ve been away from writing for way too long and would love to resume it.

More to follow, I hope.



Emma

Our family got Emma almost 6 years ago. She’s been an important part of the kids’ lives growing up and a great companion to me (especially when the kids are gone). Her 6th birthday is just around the corner — Halloween. We’re planning to do the traditional Emma birthday bacon-cake.

She’s been a good dog — really blossoming when we moved to our 5-acre property. She now has plenty of time to chase deer, rabbits, coyotes, and squirrels, but would still prefer to hang out with the humans or just take a nap. She’s starting to show a little gray this year but is still mostly full of energy.


On Mountains and Software

Way up in the hinterlands, miles from any other humans, two men faced each other on a rocky ridge covered in lava flows and avalanche lilies.

“It was clearly a suboptimal algorithm!” the taller man yelled, his arms raised.

“Everyone knows that premature optimization is the root of all evil,” the shorter, stouter man parried. “If we had known all the details of the operational environment at the time that we had implemented the original code, we might have made different decisions, but ultimately, it doesn’t matter. The code worked and the solution was a good one — at least for the first iteration.”

“The first iteration?! What is with you and iterations of software releases? Why not spend the time to do it right on the first try? Now your code is out there and it’s ridiculously slow — the whole company looks bad when you deliver crap like this.”

The short man leaned against a large basalt pillar that rose almost unnaturally out of the mountain, like an ancient stela erected to point out the glory of this lofty pinnacle. “First, make sure it works, then make it fast,” he said slowly. “Let me ask you this: do you think the company would have looked better or worse if we had spent an extra two or three weeks on the initial release to make it faster, but the whole thing had a fundamental logic error?”

The tall man writhed. “Why do you assume that we would have introduced logic errors? The code was pretty straightforward; the algorithm wasn’t that unique or novel. It was probably new in some sense, but really just a collection of existing algos that were assembled in a new way. What makes you think it was such a hard problem to begin with?”

“Because humans are imperfect. I’m not even saying that there aren’t any logic errors in the first iteration — er, the initial release, or whatever you want to call what I just wrapped up. What I am saying is that we’re building from a more structured base. We have good confidence that it works now due to the unit tests that we put in place. Sure, it’s slow, but we can always work to improve that speed; then we can regress over the unit tests to ensure that the underlying foundation is solid.”

The taller fellow shifted his feet on the hillside, lava rock crunching like broken glass under his boots. “I think those unit tests are a waste of time. Out of the two weeks you spent building the solution, you spent almost a full week on those. Some testing is great and all, but come on! You spent thousands of dollars on code our customer will never see!” He shifted again and this time a clatter of jagged rocks slid down the hillside and off the edge of a cliff about 30 feet below.

“Our customer doesn’t see the code for the unit tests, but it’s there for us to ensure that everything’s working. Tell me this, Bob: how would you have shown the customer that the code worked as they expected?”

“I would have let them watch it run — watch it digest data and then view the results! That seems kind of obvious… how else would they accept it? Sometimes I don’t get your thinking, Fred.”

Fred kicked at the base of the column he was leaning against, flattening out a section of gritty soil, flecked with chunks of lichens that had fallen from the monument he now rested against.

“The tests are nothing more than a reflection of the requirements. The function that the customer described in their requirements is accomplished and realized by the collection of algorithms and code segments that we built and assembled. If each of the blocks of code that we assembled works perfectly with regard to its own inputs and outputs, and if the code segments are assembled in a way that logically generates the expected output from the input to the entire routine, we can safely say that we’ve tested the whole thing when all unit tests are green. Sure, it won’t take into account how much they like it overall, the architectural ‘goodness’ of the solution, or the way it looks, the fonts, the UI, or even the speed you’re harping about. But if you remember, the customer never gave us any non-functional requirements like the ones I’m talking about. What they asked for was an algorithm, and that’s what we built.”

Bob put his hands on his hips. “So you’re saying that the customer isn’t going to like what you delivered, and you anticipated that, but you still delivered it?”

Fred sighed. “No, but I am saying that the customer maybe didn’t know exactly what they wanted when we started the project.”

“Oh, so now you’re smarter than the customer? Sounds like typical engineer ego.”

Fred squatted in the hollow that he had shovelled out with his foot and leaned back against the softer, mossy base of the rock, facing north. “I may be smarter or I may not be. That’s not the issue. The issue is that my job is not just to give the customer exactly what they say, or to try to tease every possible requirement out of them at the beginning. My objective is to make a customer delighted with the software that I’ve written for them. It should meet their business needs, it should be useful, and it should be pleasant to use. That’s true of all software — that’s true of all products. What my customer wants is to solve problems, and that’s why they agreed to pay me money. What I am saying is that right now, my customer has a piece of software in front of them that works. I haven’t said I’m done, I haven’t said that the software is perfect. I’ve merely presented them with a quick release and asked them to take a look and then talk with me about it.”

Bob dropped his arms to his sides and attempted to squat on the hillside, sending more lava rocks tumbling down into the ravine below. He finally settled with one foot pointed down the hill and the other uncomfortably folded under him, pressed up against some unpleasantly sharp-looking chunks of obsidian. “I just don’t get it. You want to talk with your customer about slow software that doesn’t look pretty and only runs in an ugly console. Don’t you think they’ll be pissed at what they see?”

“It depends. We talked before I started work on the project. I explained what I was planning to do. I explained that I write software in iterations. They may not be used to doing business this way, but I think that their expectations are exactly in line with what I’m giving them. They may doubt the approach I’m taking, but I didn’t lie to them, and I’m not delivering anything other than what I said I was planning to deliver. They know I’m a professional, and they expect me to do my job.”

Bob raised his eyebrows. “So you think they’ll be OK with this? Aren’t they just going to add a whole bunch of those requirements that you were talking about? Speed? How it looks? How easy it is to use?”

“Probably,” Fred shrugged. “It doesn’t really matter if they add those things now. Like I said, we have a solid base. If they want a different user interface, we’ll add that. If they want it faster, we’ll optimize it. What I’m happy about is that the code that I’ve just delivered actually will work. If they wanted to, they could start using it now.”

Bob started to raise a hand, but the whole swath of rocks below him began to slide. Fred grabbed a branch of a small, unhappy-looking Douglas fir that had managed to sprout in the crack between the basalt column and the hillside, and hauled Bob up closer to him. “Watch your step there! That hillside isn’t stable at all!”

Bob muttered a thank you under his breath and sat down as close to Fred as he could comfortably position himself. Both men sat for a while, staring across the hillside and down into the seemingly endless valleys and terrain beyond. The sun was beginning to set, and off to the west the sky was lit up with a brilliant gradient of reds, oranges, and pinks.

“Did you know that sunsets are pretty mostly because of the pollutants in the air?” Bob asked. “If it wasn’t for humans burning things, the sunsets wouldn’t be nearly as dramatic. I’ve heard that it’s the imperfections in the atmosphere that give sunsets like this so much color and variance.”

Fred nodded. “I’ve heard that too — not sure if it’s true, but it sounds reasonable.”

The men fell silent again.

“It’s like switchbacks,” Fred said suddenly. Bob looked up, uncertainty on his face. “What I mean,” continued Fred, “is that the iterative process I’m describing is like switchbacks on a mountain. You don’t just charge up or down the mountain. It’s too steep, and the footing isn’t sure. Better to progress from east to west, then west to east, then back again as you slowly gain elevation, like we did coming up. It’s slower, but it’s steadier. Plus, if you’re talking about a path that you’ve never been on before, or that hasn’t been blazed by anyone in the past, the movement back and forth gives additional perspective. As you traverse back and forth you get a better idea of the mountain you’re ascending. You can see whether your path is doomed to failure or if you can realistically reach the summit. Each switchback can feel slow and painful, but you’re safe and you’re constantly able to see what the next steps are going to be. You’re constantly analyzing and re-analyzing while still making progress towards the top.”

Bob looked down. “I guess I can see something in what you’re saying. So you’re saying that in some ways, software engineers are like Sherpas, leading their customers to the summit?”

“I like that,” Fred smiled. “It’s a lot like that. We know that our customers want a great experience, and we’re the professionals who know how to get there. We don’t want to promise them a spectacular view only to find that they were hoping to see Mt. Rainier and we were leading them up a slope to see across the Puget Sound. We could ask them a million questions about every last detail in a questionnaire before they begin. Things like ‘do you like flora more than fauna?’ or ‘do you prefer trees to rocks?’ But a lot of the people who climb these hills wouldn’t even be sure what you mean. What types of flowers are there? What types of trees? They need to experience a part before they know what they want in full. Best to give them something good soon and then tailor the rest of the hike based on how they react to what you’ve given them.”

Bob laughed. “I never really thought of it like that. I guess I can buy that. What about all that stuff you were saying about unit tests in code? Does that have a parallel in your analogy?”

“Well, I guess all analogies are somewhat imperfect, but I see things like tests as the safety that you build in. You may end up crossing a particularly treacherous avalanche chute, like the one you almost slipped down just now. It’s better to spend a few minutes picking a really good path and marking it with a cairn. Or maybe you’re crossing a creek, and you spend the time to stretch a rope across to make it a bit easier to pass through. You expect that in all likelihood you’ll be back on this path at some point, and you want to be sure that the path is safe and trustworthy. Unit tests are like that in a way. It’s not really wasted time. You end up safer the first time around, and you have more confidence when you return.”

Bob sighed. “So you think the customer will be happy?”

Fred picked himself up off the edge of the dry, dusty basalt block and stretched. “I think so, I really do.” A crepuscular pika poked its head out of a hole about 20 feet away for a few seconds, saw the men, and decided to call it a night. The sun was nearly down now, and the two men had a bit of a trek back to their base camp in the valley below. “You never know, though; some customers are never happy. Some people will never be content. Ultimately, you can’t please everyone.”

Bob scrambled up, brushing dirt and shards of rock from his pants. “I guess not,” he said.

Fred turned and started walking back down the rocky path they had come up. He paused as they passed the first switchback on their way down. “You know, Bob, even though hikes aren’t always as much fun as I thought they might be, and even though you end up with blisters and scratches and maybe the mosquitoes are biting, I still enjoy them.”

Bob just smiled.



Branding

Part of Carl Jung’s contribution to the world of psychology is his concept of “archetypes”. From Wikipedia:

In Jungian psychology, archetypes are highly developed elements of the collective unconscious. Being unconscious, the existence of archetypes can only be deduced indirectly by examining behavior, images, art, myths, religions, or dreams. Carl Jung understood archetypes as universal, archaic patterns and images that derive from the collective unconscious and are the psychic counterpart of instinct. They are inherited potentials which are actualized when they enter consciousness as images or manifest in behavior on interaction with the outside world. They are autonomous and hidden forms which are transformed once they enter consciousness and are given particular expression by individuals and their cultures.

Strictly speaking, Jungian archetypes refer to unclear underlying forms or the archetypes-as-such from which emerge images and motifs such as the mother, the child, the trickster, and the flood among others. It is history, culture and personal context that shape these manifest representations thereby giving them their specific content. These images and motifs are more precisely called archetypal images.

I read an interesting article a while back that talked about “personal brands” from this Jungian archetypal perspective.

It’s a fascinating concept. These sorts of constructs are of course nothing more than categorizing or organizing observations into containers from which we generalize. However, I think it’s interesting to observe how true to life some of the archetypes in the linked article are.

Fun stuff.



Augmented Reality

The following is a not terribly organized set of ramblings that I had regarding augmented reality.

Just for the sake of defining what I’m talking about, Wikipedia refers to augmented reality as:

Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one’s current perception of reality. By contrast, virtual reality replaces the real world with a simulated one. Augmentation is conventionally in real-time and in semantic context with environmental elements, such as sports scores on TV during a match. With the help of advanced AR technology (e.g. adding computer vision and object recognition) the information about the surrounding real world of the user becomes interactive and digitally manipulable. Artificial information about the environment and its objects can be overlaid on the real world.

There are different types of augmented reality. For example, the interface used in the movie Minority Report is a sort of augmented reality interface. More recently, Google Glass was in the news quite a bit with its interesting wearable augmented reality device. Google’s product (and, in general, all “wearable” augmented reality devices) is closer to what I’m thinking about in this article today.

The odd thing is that although I like the idea of wearable “augmented reality” in many ways, it needs to be discreet enough that basically no one but me will know if I’m running it. Part of the issue with any of this tech is that it makes humans seem less human. People aren’t comfortable talking to someone who has a visible camera aperture pointed at them. In reality, I think most people are aware that, with the pervasive use of video surveillance cameras and other recording devices, we are already being recorded on a regular (almost continuous, depending on where we are) basis, so it’s not a whole lot different. There’s something personal about it, though.

Let’s say I’m talking to a friend about how frustrated I am with another person. Maybe my boss, or my teacher, or a friend who I feel has wronged me or others. In these situations, we currently feel comfortable because we are in a private area and the discussion can’t be recorded without making it obvious. As we move into the future, this is going to be less of a sure thing. We will have to trust people either to not record when we ask or to keep their recordings private. This is new for us in the realm of private, face-to-face conversations, but it’s not new elsewhere. I forward emails that complain or whine about other people, and I assume that my friends will not forward them on. (Please note: I’m not justifying my whining about third parties with other people. This is probably a bad habit that I should break. Regardless, I still do it sometimes.)

The example I gave above is more about gossip than anything else, but the same could be said for pillow talk (or “revenge porn,” which is becoming a thing) or even things as mundane as business decisions in a company. In the near future, recording devices and other computing resources will be small enough as to be nearly undetectable. A cultural and technological etiquette has to be established to deal with this properly. What I mean is that in some ways this is about being polite and civil, as well as trusting and being trusted; at the same time, it will likely mean the development of tech to disable, or at least detect, the presence of devices like this in situations where we don’t necessarily trust each other. We already have work areas that we can’t bring certain devices into. This works when it’s relatively obvious if you’re in violation. But I think we’ll see new tech that allows a policy like our workplace’s to be ENFORCED.

If I were more of a hardware guy, I’d be looking at a startup to do DETECTION tech for new hardware like this. Let companies like Google and Apple develop our new high-tech augmented reality devices (I can just see Apple marketing it as the iBall!). We’re going to need a way for companies and people alike to feel comfortable using it. Detection is boring technology. Most people would be intrigued by an invisible augmented reality device because it adds value to their life (or they believe that it will). But a device that detects this same technology is more of a necessary purchase to protect yourself than anything else.

There are downsides to creating devices that are intended to identify or disable recording. For example, police or others who are actively abusing their power or authority do not want to have their deeds or words recorded, despite the fact that the public should be keeping them accountable. But I still think that there’s some good money to be made in this market, and I’m interested to see how it develops.




I’m most creative when I’m not


My typical work day is a mostly mental exercise. Most of my work falls into these categories:

  • Pushing or pulling ideas between various people (normally we call these conversations, but that’s what it feels like)
  • Attempting to create abstractions of concrete examples
  • Attempting to make concrete examples of abstract ideas
  • Designing optimized interactions and processes for defined workflows
  • Implementing software applications that meet the design
  • Thinking of ways that the implementation can be broken or hacked
  • Outlining and documenting all of the above in a way that almost anyone can understand it

I’m a software engineer and that’s what I do. It’s almost entirely a mental exercise. There are very few parts of the day where I just click through something, or enter data over and over, or do anything else where I can zone out. I think a lot of non-techies think that computers are all about following some steps in a process (e.g. “Push that button, then click here, then push the other button.”). I’m sure to offend someone here, but I’ll pick an example of a task that at first requires some mental exercise but then really doesn’t: assembling furniture from Ikea. When you first get a box and pull out the pieces, you have to apply some thought. Hopefully you skim through the directions, make sure you have the parts that you expect, and then start on page 1. You look at the pieces, look at the drawing, and start working. Two points to make on this: first, the thought is pretty basic. You’ve been given everything you need to work; you just have to follow the steps. Second, if you buy more than one item (or if your Ikea furniture wears out like ours does) you may find yourself assembling the second unit without even referencing the book. You essentially just have to be smart enough to identify what object is in your hand and which parts in front of you fit where on it.

Software Development Life Cycle

Software engineering is almost never like this. To be sure, there are processes and steps and flow in software. There is a typical software development lifecycle composed of Planning, Design, Development, Testing, Deployment, and Maintenance [although sometimes the order and categories are described differently, it’s still basically the same thing], but this is just a huge framework that outlines the overall process. In fact, in many software methodologies the lifecycle is intentionally NOT a straightforward process but rather an iterative one. You start with requirements, do some design, write some tests, write code, do more design, refine some requirements, write more code, etc. Most of the process is up to you.

Gathering requirements for a software project can be aided by good tools and good frameworks. However, it’s still up to smart people to ask good questions, translate sometimes misleading responses, and then continue to ask questions that flesh out the real desired functionality. Software design has a number of tools and frameworks as well. Just as in real-world architecture, there are numerous design patterns that can be leveraged to achieve a solid design, but only a skilled engineer can make the call on which design pattern is appropriate. We have great modeling tools like UML which can help in developing a [mostly] unambiguous representation of a software component, but someone has to construct the diagrams.

The coding or implementation itself is probably the least “mental” part of the work. Granted, there’s a large amount of material a developer needs in order to understand how to implement a design in code, but most of it is easily available from reference books or, more commonly, the Internet. Even so, a developer must spend time understanding the design and ensuring that the code does in fact implement it.

It’s difficult to ensure that your implementation is in line with the requirements, so testing is necessary. Testing is hard. I’m not even for a moment suggesting that it is too hard and should be ignored. In fact, you can’t really separate testing from implementation, since there’s no way of knowing whether what you wrote works without testing. But good, solid testing can be quite difficult. Modern tools are wonderful for testing — it’s easy to write unit tests to verify that code does what it’s supposed to do. The tools make life easier, but there’s still an enormous amount of thought that has to go into testing. Edge cases and complex use-case scenarios are common; it’s often very hard to have good tests that adequately cover all the requirements rigorously. Often software is written that supports an infinite variability of input. In order to write adequate tests, a developer must have intimate knowledge of the requirements, but also of the language, frameworks, operating system, and other parameters that are part of the software environment. The tests themselves are easy to implement. It’s knowing what we should test that’s hard.

Without even going into deployment, maintenance, and other related tasks, it’s easy to see that most of this process is a thinking process. In fact, really good software can be nearly complete before any code has been written.
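To make the unit-testing point a little more concrete, here’s a minimal sketch using Python’s built-in unittest module. Everything in it (the available_seats function, its seat-map format, and the expected results) is invented here purely for illustration; the point is simply that the tests pin down what the code is supposed to do, so that later changes, say an optimization pass, can be checked against the same expectations.

```python
# A minimal sketch of the kind of unit test described above. The
# available_seats function and its seat-map format are hypothetical,
# invented for illustration only.
import unittest


def available_seats(seat_map, party_size):
    """Return runs of adjacent open seats at least party_size long."""
    runs, current = [], []
    for seat, is_open in seat_map:
        if is_open:
            current.append(seat)
        else:
            if len(current) >= party_size:
                runs.append(current)
            current = []
    if len(current) >= party_size:
        runs.append(current)
    return runs


class AvailableSeatsTest(unittest.TestCase):
    def test_finds_adjacent_block(self):
        seat_map = [("1A", True), ("1B", True), ("1C", False)]
        self.assertEqual(available_seats(seat_map, 2), [["1A", "1B"]])

    def test_no_block_large_enough(self):
        seat_map = [("1A", True), ("1B", False), ("1C", True)]
        self.assertEqual(available_seats(seat_map, 2), [])


if __name__ == "__main__":
    unittest.main()
```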

My point in all this is that software takes a lot of brain power. And more than just brain power, it requires a lot of creativity. A lot of software has been written in the last 60 years. New software isn’t just about reinventing existing software but about making things work better than before. Perhaps this means finding a creative way to tweak more performance out of the same hardware, or constructing a more efficient interface so that humans can spend less time learning and more time using, or finding a way to make disparate systems talk to each other smoothly and correctly. All of these are valuable, but often we’re not trying to find a solution but to find the best solution. Requirements have many non-functional elements that describe constraints on the system. We might be building a system that searches an airline reservation system for available seats, but our constraint might be that we need to retrieve this data from the current system in less than 1 second. That part gets tricky.

This wears me out mentally. I find that by the end of the day I rarely want to spend time on the computer (unless it’s doing something mindless). In fact, even on the weekends I greatly prefer activities that are manual. I enjoy do-it-yourself projects around the house that involve punching holes in walls and cutting down trees. That sort of thing is easy — it takes some minimal thought and care but it’s basically doing stuff not thinking about stuff. I rarely do work on evenings or weekends that’s very creative — I’ve struggled just to write on this blog let alone writing out some stories that I’ve had floating in my head for years. Any project around the house that requires a good amount of planning or preparation tends to not happen because my brain sees the looming creative task and tries to shut down.

So finally, one observation, and one idea.

In the last several weeks I’ve transitioned in my full-time work to doing very repetitive (even boring) work that’s much more akin to pushing buttons and clicking the mouse. There’s a large backlog of system configuration tasks that I’ve been working through. The work is easy in one sense. Almost all of it is well-defined and easily understood. It just takes time. Sometimes I have to chase down problems and troubleshoot things that aren’t working, but it’s mostly clicking, typing, waiting, and repeating. What I’ve observed is that after about 3 days of “time off” from the creative process of software engineering, I had a large boost in creativity. During the day, in the evening, on the weekend — I was suddenly interested in doing creative tasks. My brain had realized that it wasn’t getting any action and started asking for attention. I spent a few late nights working on some tasks that I’ve wanted to do for a long time (including some software development for a personal project that’s been on the back burner for literally years). All of it felt not only good, but great. I felt like I had time to think through things, and although it’s a little subjective, I think the quality of my work was superior to what it normally is. I’ve had more “ah ha” moments and more “outside the box” solutions than I normally do. It’s been wonderful.

This probably isn’t too much of a surprise to anyone who thinks about it. I’m sure there are studies and articles on the subject (I’ve seen some). However, I’m interested in the idea that there may be optimal ways of mixing work for professions that require large amounts of creativity. For example, it might make sense that the best activity for a group of software engineers isn’t to have them spend an extra day each week on a project of their choosing (although I think that’s better than nothing) but rather to have them spend time running cable in the corporate offices, doing rack wiring, or adding components to computer cases. These are all worthwhile activities, and although they take some skill, most technically inclined people find them fairly simple mentally (although not always physically). A company doesn’t normally pay the same salary to people who do these tasks, but I think it could provide real value to the company in the long run. By turning off the creative process for a little while, software engineers could [perhaps] be much more creative when they come back to their regular work. I’m not sure if there have been attempts to do things like this in a company that employs software engineers, but I’m curious to hear from my readers on whether they think the idea has any merit. Anecdotal evidence as well as real studies on the subject are all welcome!


Interfaces for creating?

I’ve found this discussion a very interesting one… There’s a lot of conjecture out there that the iPad (and similar devices) shifts use of the Internet away from “creating” and towards “consumption”. To some extent, this seems obvious. Activities like music and video are clearly consumptive, and these activities are often more convenient (and seem a more probable use) for portable devices like the iPad. Also, in general, reading is quite easy with the iPad/Kindle but typing is harder than with a regular laptop or keyboard. I find myself definitely being a consumer far more on the iPad. Even with emails, I tend to read and mark for later handling far more on the iPad. On my desktop, on the other hand, I tend to immediately reply to the emails that I can knock out in the next minute or two. I might look at pictures on my iPad, but I definitely don’t do any editing (although the Photoshop Mobile app is kind of neat for really simple tweaking).

So while I can agree with the observation that iPads and other smaller devices are currently used for consumption vs. creation, I think this may just be a phase. Computer users have used keyboards for a long time. In fact, the first keyboard appears to date to the 18th century, with our current QWERTY layout dating to 1873. In addition, the mouse, first created in 1963 but not in common use until the 1980s, is also ubiquitous in modern systems. One could argue that it’s a powerful device for manipulating interfaces, but I don’t think it’s the end-all of human-machine interfaces.

There will be something new. There always is. Touch-based computing has its strengths and weaknesses. There’s an almost nauseating volume of interfaces that can all be summarized as “sort of like the interface used in Minority Report”. With faster processors, better algorithms for processing inputs, etc., it seems simply a matter of time before a new breed of general-purpose input devices becomes standard.

Keyboard input (and, to a slightly lesser degree, mouse input) is currently preferred because it is precise. Learning to type is a relatively easy task and provides a very easy-to-control way of interfacing with systems. Using a mouse is trivial to learn, although it is much slower to use for many tasks. Its strength is that it works very well in graphical environments that involve manipulating elements using hand-eye coordination. The combination of both in modern systems allows precise control when needed, and manipulation of complex interfaces when needed.

Touch input devices provide a more natural feel for the second type of interface, but not the first. Precise input is slow and painful. The value gained is that the iPad and similar devices are instant-on devices that don’t require you to sit, position yourself, or even use both hands. A user gains speed, portability, and convenience but loses precision.

Two things really interest me in this area. The first is motion-based systems like (to some extent) the Wii and, more importantly, the Kinect. Both systems use the concept of movement (one with a controller you hold and the other by simply viewing the user themselves). The second is voice-based systems like Siri. There have been many voice-based systems previously, but Siri seems to have attained a more natural level of interaction that I think finally makes voice control more practical.

The interesting thing is that both approaches reduce precision in the system and attempt to get at the underlying intent of the input. You can ask Siri “What’s the weather like”, “will it rain today”, or “Weather” and it will give the same response. The attempt is to map a number of inputs to the same output. It can handle heavy accents and variations in speed, pitch, and intonation and still give results that make sense. Kinect-based systems look at standard or typical behavior and are all about averaging inputs to try to get an approximate value rather than working with precise values.
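As a loose illustration of that many-inputs-to-one-output idea, here’s a toy sketch in Python. To be clear, this is not how Siri actually works; the intent names and keyword patterns below are invented, and real systems use statistical models rather than simple keyword matching.

```python
import re

# Toy intent matcher: many different phrasings collapse to one "intent".
# The patterns and intent names are invented purely for illustration.
INTENT_PATTERNS = {
    "get_weather": [r"\bweather\b", r"\brain\b", r"\bforecast\b"],
    "set_alarm": [r"\balarm\b", r"\bwake me\b"],
}


def classify(utterance):
    """Map an utterance to the first intent whose patterns match it."""
    text = utterance.lower()
    for intent, patterns in INTENT_PATTERNS.items():
        if any(re.search(p, text) for p in patterns):
            return intent
    return "unknown"


# All three phrasings from the Siri example map to the same intent:
for phrase in ["What's the weather like", "will it rain today", "Weather"]:
    print(phrase, "->", classify(phrase))  # each prints "-> get_weather"
```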

These new technologies can be leveraged in interesting ways. It’s clear that games that involve more physical activity are fun and interesting. It’s also clear that being able to speak to your phone to perform tasks that would take longer to do with the touch input saves time. But will anything ever replace the keyboard?

I don’t have a crystal ball, but I think the important thing is that touch input, voice input, and motion-based input are really not trying to solve that issue. All of these inputs are inherently less precise (just as a mouse is less precise than a keyboard). Although there are some very interesting efforts to use a Kinect to write code in Visual Studio, it seems more likely that, at best, motion technology could replace the mouse, or replace the mouse for specialized types of manipulation. Speech seems to be a good way of performing out-of-band or contextual tasks (say, for example, you’re in the middle of a coding task and want to send the current file to a teammate for review without stopping what you’re doing and performing the task manually).

Rapid but precise input is what’s needed for devices like the iPad to shift the trend from consuming information to creating information. This could be accomplished by new types of one-handed keyboards (which have been attempted); I have a hard time seeing how we will achieve precision with devices not controlled by the human hand. Another option is a radical change in the interfaces themselves. To give an example, instead of writing code using a complex written syntax like that of most modern languages, a special language could be developed that encapsulates the structure of the code but can be represented in a format that is more easily parsed and understood audibly. Transitions like this have already taken place in languages like LabVIEW, which attempts to represent programming code in a visual format vs. a written syntax. I have a hard time picturing how this could be accomplished, but in theory I can see that it may be a possibility. There will be naysayers. But there are naysayers now with regard to high-level languages, which already abstract an enormous amount of “what really happens” away from the user.

Any thoughts on input devices and human-computer interaction as it’s currently evolving?




An Exaltation of Larks


I was delighted to browse through this book recently. A fun read for anyone interested in language. It’s not a long or complex book, really just a nugget about some interesting developments in English, with plenty of historical anecdotes and references as well as an interesting list of collective nouns.

The at-first confusing title is simply an example of one of the collective nouns, or “terms of venery,” that come to us from Medieval hunting tradition. Other examples include a “Murder of Crows” and a “Gaggle of Geese”. These collective nouns were used by gentlemen of the time to refer to groups of animals. The terms themselves were something of a mix of an argot and an inside joke (as many of the terms are quite playful or imaginative).

The information is not particularly practical or useful, but it clued me in to an aspect of Medieval life that I was previously unaware of.



Car Audio/Automation

I’ve been sort of disappointed. We don’t have our promised flying cars yet. But in addition, some of the existing tech that we do have seems sadly lacking. In an era of iPhones, video chat, Internet video streaming, integrated digital sound systems, etc., it’s quite frustrating to observe the current market for car audio devices.

My commute recently went from 44 miles per day to 130 miles per day, and obviously it’s nice to have something going in the background, be that music, lectures, sermons, podcasts, or NPR (yes, I listen to NPR!). So I’ve been looking at upgrading from my 1998 Toyota Corolla’s stock radio and tape deck to something better.

I’m struggling.

This reminds me a lot of how I viewed the pre-Treo 600 cell phone market (although to me, even phones like the Treo were disappointing). You could pick from several hundred choices, all of which appeared to be designed without any standardization, attention to detail, or solid feel (that horrible crunchy plastic feel that was finally cured with the iPhone). As I survey the current landscape for car audio systems, I’m sort of seeing the same thing.

What I’m frustrated with:

  1. HD Radio support — this is easy, but I hate being nickel-and-dimed an extra $80 to turn the spiffy “HD Ready” unit into an ACTUAL HD Radio. Let’s just make this standard.
  2. Auxiliary input — this is almost standard across the board but seems to have so many problems on many units. In many cases, it’s either a very difficult interface to navigate or really bad noise on the line. With my 12-year-old stock unit, I can use a cassette adapter and get decent-sounding audio in less than 5 seconds. Why are modern units worse?
  3. Overall interface bizarreness. Beauty is in the eye of the beholder, and user interfaces are hard to rate objectively, but very few “best practices” are ever followed in interface design for these units. Often there are confusing knobs, multiple buttons that appear to do conflicting things, and odd resets and menu navigation that mean you have to press 14 buttons to switch to your iPod input.
  4. “Flavor of the week” interfaces. Come on, people. iPods are neat, iPhones are neat, but don’t sell me a unit because it now supports Pandora ON the iPhone itself. The one advantage is that instead of tapping input into the iPhone, you tap on the car audio unit. I’m not seeing justification to drop an extra $50+ on this.

What I’d really like to see is:

  1. Let’s be honest, I’d like to see Apple design an interface. They do this amazingly well. Some people may not love it (hey, everyone’s different) but it would reset the industry as the development and release of the iPod and iPhone did. The combination of simple interfaces, never being “far” from common tasks, and reasonably strong and durable hardware design would be simply amazing.
  2. Upgradable firmware. Everyone has wireless these days; many if not most people could receive wireless in their garage. Even better, why not integrate 3G/4G into these units directly? If you have connectivity, it seems quite reasonable to allow new software interfaces, new protocols, new “apps” of some sort to be used. For that matter, why reinvent the wheel — let’s use iOS or Android as the OS for these devices. If an iPad can sell for $600 with free WiFi or a $30/month 3G subscription, surely a head unit could hit the same price point. Currently, many of these units are $1,000+ and, from what I’ve seen, offer few if any of these benefits.
  3. Get standard — allow USB Bluetooth dongles to be used, allow WiFi USB dongles to expand simple systems, provide a web interface so you can use your laptop or home computer to configure settings and features.
  4. Related to the above, a true separation between hardware and software. I should be able to buy a unit and then buy 8 different navigation systems or audio players that all run on the same hardware. I don’t want to be stuck with some name-brand piece of junk “solution” that I can never upgrade or change.

It’s much easier to complain than to actually do research. I may have completely missed some models out there or companies who are actually moving in this direction. If so, please leave a comment with any details.

I know very little about Microsoft’s foray into this sort of thing, mainly because, from what I understand, their Sync technology is exclusive to Ford vehicles. It sounds cool, but it’s only a first step in my opinion. Voice control is great, but they seem to just be replaying the same paradigm of older systems with a few Microsoft-ish bells and whistles.

As a final note, I’ll just say that I like the stock units the most — high-end cars come with some pretty amazing units that are hard to beat as far as making the interface blend perfectly with the car itself. In addition, integrated Bluetooth that’s tidily hidden away, steering wheel volume controls, etc. are all great features. And maybe it’s the presence of reasonable built-in units that’s hurting the development of this market. Unless a big-name company cuts a deal with a major car maker, it seems unlikely that the after-market would generate enough sales to warrant serious investment in this technology.

Any thoughts?



Privateer

So I’m listening to a new Pandora station this morning (thanks, Robert, for the suggestion — good stuff). The particular track playing at the moment reminds me so much of an old MS-DOS game called Privateer by Origin Systems. I remember distinctly getting the game from my Dad in 1993 and loading all 7 or so diskettes onto our ancient beast of a computer. After some tweaking to fix some memory issues, we finally got it up and running. You play as the pilot of an [initially] small ship, flying between planets, space stations, asteroids, and other bases while making your living as a trader, a mercenary, or whatever you choose.

This game was amazing. The graphics, of course, look awful now as I review the site, but the gameplay was incredible. The joystick took some skill to use effectively, whether you chose life as a merchant or a gun for hire. Interaction and AI weren’t great, but for the time they were pretty good. I spent many hours playing the game and really enjoyed it. The music was “futuristic” synthesized music, probably not the best quality, but it always felt so fitting for the game. The game created an incredibly immersive world that sucked you in despite its relative simplicity compared to modern games like EVE Online. It was a good balance: I was inspired, intrigued, and entertained, but not to the point that I forgot about reality altogether. Newer games definitely provide more depth than this old-time game, but I can’t afford to spend an average of 2.5 hours per day (which is apparently the average for EVE Online players).

I also played Freelancer (made by the same designer after Origin was acquired by Electronic Arts), but it just didn’t feel the same.

Does anyone out there know of, or can anyone recommend, games like this that balance a high level of fascination with a certain restraint that still keeps you grounded in reality?

