Archive for category Ranting & Ravings


Unclean

I have a modest proposal for the state legislature: I think we should pass a law that requires doctors to wear a little badge on their shirt that says “Unclean” if they fail to wash their hands at least 12 times per day. Washing your hands is good! It’s almost free! And any doctor who doesn’t comply with the law is obviously evil for failing to meet our arbitrary standards. Doctors will probably object to this because it’s silly and also because any doctor who doesn’t meet the requirement (regardless of their reason) will look really bad with an “Unclean” badge on their shirt, but that’s not really important because after all they’re evil (see above). It’s our right to have doctors with clean hands!

No Comments

Pied Beauty

[Image: Yellowstone in the morning]

Glory be to God for dappled things –
For skies of couple-colour as a brinded cow;
For rose-moles all in stipple upon trout that swim;
Fresh-firecoal chestnut-falls; finches’ wings;
Landscape plotted and pieced—fold, fallow, and plough;
And all trades, their gear and tackle and trim.

All things counter, original, spare, strange;
Whatever is fickle, freckled (who knows how?)
With swift, slow; sweet, sour; adazzle, dim;
He fathers-forth whose beauty is past change:
Praise Him.

Pied Beauty by Gerard Manley Hopkins

No Comments

Interfaces for creating?

I’ve found this discussion very interesting… There’s a lot of conjecture out there that the iPad (and similar devices) shifts use of the Internet away from “creating” and towards “consumption”. To some extent, this seems obvious. Activities like listening to music and watching video are clearly consumptive, and they are often more convenient (and a more probable use) on portable devices like the iPad. In general, reading is quite easy on the iPad/Kindle, but typing is harder than with a regular laptop or keyboard. I definitely find myself being more of a consumer on the iPad. Even with email, I tend to read and mark messages for later handling far more on the iPad; on my desktop, on the other hand, I tend to immediately reply to the emails that I can knock out in the next minute or two. I might look at pictures on my iPad, but I definitely don’t do any editing (although the Photoshop Mobile app is kind of neat for really simple tweaking).

So while I can agree with the observation that iPads and other smaller devices are currently being used for consumption rather than creation, I think this may just be a phase. Computer users have used keyboards for a long time; in fact, the first keyboard appears to date to the 18th century, and our current QWERTY layout dates to 1873. The mouse, first created in 1963 but not in common use until the 1980s, is also ubiquitous in modern systems. One could argue that it’s a powerful device for manipulating interfaces, but I don’t think it’s the be-all and end-all of human-machine interfaces.

There will be something new. There always is. Touch-based computing has its strengths and weaknesses. There’s an almost nauseating volume of interfaces that can all be summarized as “sort of like the interface used in Minority Report”. With faster processors, better algorithms for processing inputs, and so on, it seems only a matter of time before a new breed of general-purpose input devices becomes standard.

[Image: How would you like to write code with this?]

Keyboard input (and to a slightly lesser degree mouse input) is currently preferred because it is precise. Learning to type is a relatively easy task and provides a very controllable way of interfacing with systems. Using a mouse is trivial to learn, although it is much slower for many tasks; its strength is that it works very well in graphical environments that involve manipulating elements with hand-eye coordination. The combination of the two in modern systems allows precise control when needed and manipulation of complex interfaces when needed.

Touch input devices provide a more natural feel for the second type of interface, but not the first. Precise input is slow and painful. The value gained is that the iPad and similar devices are instant-on devices that don’t require you to sit down, position yourself, or even use both hands. A user gains speed, portability, and convenience but loses precision.

Two things really interest me in this area. The first is motion-based systems like (to some extent) the Wii and, more importantly, the Kinect. Both use the concept of movement (one with a controller you hold, the other by simply watching the user directly). The second is voice-based systems like Siri. There have been many voice-based systems previously, but Siri seems to have attained a more natural level of interaction that I think finally makes voice control more practical.

The interesting thing is that both approaches reduce precision and attempt to get at the underlying intent of the input. You can ask Siri “What’s the weather like”, “Will it rain today”, or just “Weather” and it will give the same response. The attempt is to map a number of inputs to the same output. It can handle heavy accents and variations in speed, pitch, and intonation and still give results that make sense. Kinect-based systems look at standard or typical behavior and are all about averaging inputs to get an approximate value rather than working with precise values.
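To make that idea concrete, here’s a toy sketch in plain C++ (my own illustration, nothing to do with how Siri is actually implemented) of collapsing several phrasings onto one canonical intent:

#include <iostream>
#include <map>
#include <string>

// Toy intent mapper: many different inputs collapse to one output.
// Purely illustrative; a real recognizer does this statistically over a
// noisy audio signal, not with a lookup table.
std::string IntentFor(const std::string& utterance)
{
    static const std::map<std::string, std::string> intents = {
        { "what's the weather like", "GetWeather" },
        { "will it rain today",      "GetWeather" },
        { "weather",                 "GetWeather" },
    };
    const auto it = intents.find(utterance);
    return it != intents.end() ? it->second : "Unknown";
}

int main()
{
    // All three phrasings produce the same intent.
    std::cout << IntentFor("weather") << "\n";            // GetWeather
    std::cout << IntentFor("will it rain today") << "\n"; // GetWeather
}

The point of the table is just that precision is sacrificed in the input and recovered in the intent.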

These new technologies can be leveraged in interesting ways. It’s clear that games that involve more physical activity are fun and interesting. It’s also clear that being able to speak to your phone to perform tasks that would take longer to do with the touch input saves time. But will anything ever replace the keyboard?

I don’t have a crystal ball, but I think the important thing is that touch input, voice input, and motion-based input are not really trying to solve that problem. All of these inputs are inherently less precise (just as a mouse is less precise than a keyboard). Although there are some very interesting efforts to use a Kinect to write code in Visual Studio, it seems more likely that, at best, motion technology could replace the mouse, or replace it for specialized types of manipulation. Speech seems to be a good way of performing out-of-band or contextual tasks (say, for example, you’re in the middle of a coding task and want to send the current file to a teammate for review without stopping what you’re doing to perform the task manually).

Rapid but precise input is what’s needed for devices like the iPad to shift the trend from consuming information to creating it. This could be accomplished by new types of one-handed keyboards (which have been attempted), though I have a hard time seeing how we will achieve precision with devices not controlled by the human hand. Another option is a radical change in the interfaces themselves. For example, instead of writing code using a complex written syntax like that of most modern languages, a special language could be developed that encapsulates the structure of the code but can be represented in a format that is more easily parsed and understood audibly. Transitions like this have already taken place in languages like LabVIEW, which represents programming code in a visual format rather than a written syntax. I have a hard time picturing how this could be accomplished, but in theory I can see that it may be possible. There will be naysayers. But there are naysayers now with regard to high-level languages, which already abstract an enormous amount of “what really happens” away from the user.
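To give a flavor of what I mean (a pure thought experiment in plain C++, not an existing language), imagine code structures that carry their own spoken representation:

#include <iostream>
#include <string>

// Thought experiment: a loop construct that can render itself as a phrase
// an ear could parse, instead of punctuation-heavy written syntax.
struct CountedLoop
{
    std::string counter;
    int times;
};

// Roughly the audible equivalent of: for (int i = 0; i < times; ++i) { ... }
std::string Spoken(const CountedLoop& loop)
{
    return "repeat " + std::to_string(loop.times) +
           " times with counter " + loop.counter;
}

int main()
{
    std::cout << Spoken(CountedLoop{"i", 10}) << "\n";
    // Prints: repeat 10 times with counter i
}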

Any thoughts on input devices and human-computer interaction as it’s currently evolving?

 


1 Comment

IT Dept. vs. IT Consultant

I moonlight from my full-time gig as a Software Engineer by doing IT support. This ranges from desktop support, hardware upgrades, and purchasing assistance to remote access and support issues, network setup, configuration, and administration. Basically, I’m acting as an IT department for small companies.

I think I’m mostly productive in this role. Small offices don’t have a lot of issues, and to some extent users train themselves to accomplish routine IT-related tasks. The contrast between the “IT consultant”, who typically charges a bit more per hour to be on call to resolve a wide range of issues, and the “IT guy”, who is hired full-time to assist with any ongoing issues, is an interesting one.

A consultant typically:

  • Is expensive on a per hour basis
  • Must perform constantly to avoid being replaced (which is usually trivial for the client to do)
  • May not be available immediately since they have other clients, but can often be available fairly quickly if you’re willing to pay even more
  • Is expected to resolve issues quickly and surgically

An IT employee:

  • Is cheap on a per hour basis
  • But is relatively difficult to fire and replace if performance becomes a problem
  • Is available immediately 5 days a week
  • Is expected to solve problems as they arise but with little motivation to finish them immediately

I believe that for small companies, the first option is almost always the best. One of the main reasons, I believe, is motivation. A consultant must perform constantly and consistently with every hour they bill. They can be replaced easily in most circumstances and therefore must “earn their keep”. In addition, although there are definite conveniences to having local IT staff, most small companies do not have enough work to keep such staff busy. If they do, it’s likely because processes are not streamlined. If a member of the IT staff must do 4 hours of work every day just to keep systems running, time is probably being wasted. An expensive consultant might cost 5 times as much per hour but could be tasked to automate the process once and for all. In many cases, the IT staff member may not have the training or experience to perform that automation (consultants often have more well-rounded experience in process, software development, architecture, etc.), and normally they wouldn’t have much motivation to automate a task that would make their own position unnecessary.
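To put hypothetical numbers on it: an in-house tech at $25/hour spending 4 hours a day on a routine process costs roughly $500 a week, or about $26,000 a year. A consultant at five times the rate ($125/hour) who spends 40 hours automating that process away costs $5,000, once. The exact figures are invented, but the shape of the comparison is the point.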

For larger companies, the volume of IT work is likely sufficient to keep full-time staff busy even if they are constantly resolving old problems, and moving onto new problems. In a situation where a single contractor can no longer fill your IT needs, it’s likely much cheaper to hire full-time IT staff.

There are scumbag employees and scumbag contractors. One of the biggest things that I have always striven for as a consulting contractor is to work myself out of a job. Ideally, I should be fixing things to a point that my client will only call when they want new features, new technology, new ideas. If all I do is treat a chronic IT wound, I’m part of the problem. Obviously, some systems take constant maintenance. But if I’m working on one system and I see a new system on the market that resolves or reduces these ongoing maintenance issues I will always point my client in that direction. My goal is to provide professional services that improve my client’s bottom line — not my own. Ideally, I’d love long-term relationships that are lucrative to me as well. But I want my customers to be successful with me as a partner, not in spite of me.

Ok, this is starting to sound like a sales pitch.

To summarize: in general, I advocate IT consultants for small businesses and IT departments for medium- to large-sized companies (or perhaps for specialized small businesses with unusually high levels of IT need). I think it’s critical that your consultant is really, truly attempting to maximize your ROI, and that they’re hired and rewarded in a manner consistent with that. The same applies to your employees — your own IT staff must have support from management so that they can make their own positions more efficient and be rewarded for it. Contractors get greedy with high retainers, and employees get lazy with routine work. All organizations, large and small, need to be aware of this and prepare for it in advance.


1 Comment

George Orwell on Language

Posted many other places but I’ve always enjoyed this:

I am going to translate a passage of good English into modern English of the worst sort. Here is a well-known verse from Ecclesiastes:

I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of skill; but time and chance happeneth to them all.

Here it is in modern English:

Objective consideration of contemporary phenomena compels the conclusion that success or failure in competitive activities exhibits no tendency to be commensurate with innate capacity, but that a considerable element of the unpredictable must invariably be taken into account.

This is a parody, but not a very gross one. […] It will be seen that I have not made a full translation. The beginning and ending of the sentence follow the original meaning fairly closely, but in the middle the concrete illustrations–race, battle, bread–dissolve into the vague phrase “success or failure in competitive activities.” This had to be so, because no modern writer of the kind I am discussing–no one capable of using phrases like “objective consideration of contemporary phenomena”–would ever tabulate his thoughts in that precise and detailed way. The whole tendency of modern prose is away from concreteness. Now analyze these two sentences a little more closely. The first contains forty-nine words but only sixty syllables, and all its words are those of everyday life. The second contains thirty-eight words of ninety syllables: eighteen of its words are from Latin roots, and one from Greek. The first sentence contains six vivid images, and only one phrase (“time and chance”) that could be called vague. The second contains not a single fresh, arresting phrase, and in spite of its ninety syllables it gives only a shortened version of the meaning contained in the first. Yet without a doubt it is the second kind of sentence that is gaining ground in modern English. I do not want to exaggerate. This kind of writing is not yet universal, and outcrops of simplicity will occur here and there in the worst-written page. Still, if you or I were told to write a few lines on the uncertainty of human fortunes, we should probably come much nearer to my imaginary sentence than to the one from Ecclesiastes.

From George Orwell, “Politics and the English Language”, 1946 — emphasis is mine.

Perhaps my brain is turning to mush a bit young, but I’m often quite baffled by modern writers who seem intent on making language liquid and amorphous. Even more troubling, I find that people will often point to something like the parody sentence above and be convinced that, because of its technical use of language, it’s probably superior, and even more concrete. Loss of metaphor, use of highly specialized language, and the tacking together of rote phrases and clauses result in a meaningless jumble of confusion.

See also: George Orwell on Writing


No Comments

Twitter: A great way to complain

I was pleasantly surprised recently to find a practical use for Twitter. I’m no Luddite, but I rarely find value in Twitter that I don’t find elsewhere. I do follow a lot of tech writers/bloggers/developers, which can be good for keeping up with developing trends, but I digress.

The practical feature: Complaining.

We had a miserable experience at the local Red Robin recently (South Hill/Puyallup, WA). Dirty, long wait, poor waiter service, etc. I posted this on Twitter:

Just got back from #RedRobin — disgusting…. that place has really gone downhill. Too bad.

Notice the tag on RedRobin. I was surprised when I fairly promptly got a reply on Twitter:

@andrewflanagan Yikes! Can you please send us the details/location at [email protected]? Thanks for your help.

The beauty of this is that anyone searching on Twitter for RedRobin will find my tag and see my post and my rotten rating. I sent an email, they replied (CC’ing a huge number of Red Robin staff) and I was asked if I wanted to talk to the manager.

This is pretty good service. My blog (yes, this one) is not exactly all that busy and I could have posted here for weeks without anyone at Red Robin being aware of it (or even if they were aware, they wouldn’t care since it’s not exactly all that visible).

So Twitter gives you visibility. Not just to a company, but to that company’s customers. I suppose it’s a little bit more like picketing a store instead of sending a letter to the management (which is more like a blog entry).

I also recently had an issue getting approved for our BizSpark account with Microsoft (you get free software — essentially an MSDN subscription — as well as help with your startup). Again I complained, and again I got a quick response (which was very civil). Interestingly, when I followed up via email, I was asked (somewhat rudely, I would say) to remove my complaining post from Twitter. I complied, since they did fix the problem, but I’m somewhat surprised by just how much visibility I got.

What are your thoughts? Will the visibility last? Any similar experiences using Twitter or other social networks?


No Comments

Death of a Monitor

I have two of these at my desk:

I’d had the first one for a while (since 2006 or so) and was really pleased with it. Good contrast and color, HDMI interface, very snappy. The controls are very awkward, but after the initial configuration it was good to go. I purchased a second monitor intending to have a perfectly matched set. The second monitor turned out to be a slight hardware revision that included a curved bottom bezel and a number of… features that were quite annoying. First of all, unlike the first monitor with the same model number, it will not display its “native” format of 1920×1200, so I’m stuck with 1920×1080. Not terrible, but awfully weird.

About a month ago, it suddenly started acting up. It would randomly lose the signal and then a little later snap back on (almost as if it had a loose connection). This condition went from slightly annoying to unusable within a week, and then it stopped displaying at 1920×1080 at all. I was able to get an image by reducing the resolution to 1280×768, but it had weird red overlays. My assumption was that it was failing, and I bitterly unplugged it and went back to using one monitor. There went my productivity.

But thankfully, although I’m still mystified, the story has a good ending. I plugged my monitor back in a week ago and, lo and behold, the news of its death was greatly exaggerated! It’s been working flawlessly ever since. I think it may have been driver-related, but I’m just thankful it’s back.

At my full-time gig, I also have dual monitors, although the overall resolution is a lot lower. I’ve found that it’s a huge time saver… One side is my code, and the other is the dev web site. The problem is that now I’m spoiled; I tried to work recently on my 1360×768 laptop and felt like I couldn’t see anything.

How about your setup? Do you use two monitors? If so, can you live without your second monitor once you’ve used it for a while?

Lifehacker has some great tips and links to tools for dual monitor setups.


3 Comments

No Way…

I think I actually got a snippet of Symbian code to work on the first attempt! This is a first… Maybe I’m actually getting the hang of this. I just find the whole “descriptor” concept very odd.

Anyway, all I was trying to do was replace all plus signs with spaces. I normally wrestle with descriptor nonsense for a while but this time, I got it on the first try!

_LIT(TestData, "THIS+IS+A+TEST");
HBufC* heapBuf = HBufC::NewLC(255);  // heap descriptor, pushed on the cleanup stack
*heapBuf = TestData;
TPtr pHeapBuf(heapBuf->Des());       // modifiable pointer into the heap buffer
// Find() returns KErrNotFound (-1) when there is no match; testing > 0
// would miss a '+' at position 0, so compare against KErrNotFound instead.
while (heapBuf->Find(_L("+")) != KErrNotFound)
{
    // Same-length replacement, so the descriptor's length never changes.
    pHeapBuf.Replace(heapBuf->Find(_L("+")), 1, _L(" "));
}

CleanupStack::PopAndDestroy(heapBuf); // Don't forget!

Bleh… stupid Symbian. Thank goodness I didn’t have to change the length of the descriptor…


No Comments

Variable Naming

Some in computer programming have insisted on using the prefix “is” for all boolean data types. I’ve been bumping against this lately. I think it’s silly. It’s a form of Hungarian notation, which seems unnecessary considering that the compiler/interpreter in almost all cases will help us deal with type issues. For readability’s sake, wouldn’t it make sense to name something what it represents? For example, if the boolean variable represents the state of being done, I suppose isDone may be an OK name. But if it represents the state of something that may or may not have been done 3 years ago, a better name might be wasDone. What if we want special checking to take place if a flag is set? Should that flag be called isCheck? It seems silly — maybe shouldCheck would work. What if we’re talking about ownership or class relationships? isChild works as a name to indicate a class relation, but hasChildren is a perfectly logical name to define the inverse relationship. I saw a few places that advocate the use of helping verbs (have, has, had, do, does, did, shall, should, will, would, may, might, can, could, and must) or verbs of being (am, is, are, was, were, be, being, and been) to prefix these names. This makes sense. However, we speak English, and sometimes in English we drop helping verbs (think of did, for example). Is something.didSucceed better or worse than something.succeeded? There are numerous similar examples.
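A contrived sketch in plain C++ (using the names from the discussion above) shows how these choices read in practice:

#include <iostream>

// Contrived example of the naming choices discussed above. Whether each
// name reads naturally is exactly the judgment the "is" rule can't make.
class Task
{
public:
    bool isDone = false;      // current state: is it done right now?
    bool wasDone = false;     // past state: was it done three years ago?
    bool shouldCheck = true;  // a flag requesting checking; "isCheck" reads badly
    bool succeeded = false;   // arguably as clear as "didSucceed"
    int childCount = 0;

    // "hasChildren" is a natural inverse of an "isChild" relationship.
    bool hasChildren() const { return childCount > 0; }
};

int main()
{
    Task task;
    if (task.shouldCheck && !task.isDone)
        std::cout << "checking an unfinished task\n";
}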

I think in the end, naming variables to make them readable is much more important than following some convention. Or perhaps I should rephrase that: the convention we ought to follow is the convention of written English, not some tightly defined arbitrary subset.


1 Comment

Driving me Crazy

[Image: Road to America by Macindow]

One thought for alleviating some of the tension in driving:

Imagine each driver on the road around you is your mother.

Really, people — it feels better to drive somewhere and arrive a little less quickly because you took the time to treat others with courtesy.


3 Comments