“Add a feature”

In my Design and Prototyping class, we recently did an assignment called "Add a Feature". We were supposed to take an app or site we frequently use, find a competitor or two, and identify a feature the competitor has that the original lacks. We would then sketch our take on that feature as if it were added to the original app, and wireframe it in Sketch.

As my original app, I chose Things. I’ve been a user, and fan, for years. I have also, however, tried OmniFocus, and know there’s a feature there that I would love to have in Things: sequential tasks.

The basic idea of sequential tasks is that you can set a project, or a group of tasks, to operate in parallel – the normal way, where you can see all the incomplete tasks – or in sequence. When they’re in sequence, you can only see the next incomplete task, not all of them.

Which would be very helpful to me right now – I’m taking classes, and a lot of what goes into Things at the moment is "do this reading, watch these lectures, then do this assignment." Except Things doesn’t support the word ‘then’ in that sentence fragment, so I see the whole list all at once. I’d rather see only the reading, then only the lectures, then only the assignment. A perfect candidate for the assignment.
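To make the parallel-versus-sequential distinction concrete, here’s a tiny sketch in JavaScript. This is purely an illustration of the behavior I’m describing – the `mode`, `tasks`, and `done` names are my own invention, not Things’ or OmniFocus’ actual data model:

```javascript
// Illustration only: in a parallel project every incomplete task is visible;
// in a sequential one, only the next incomplete task shows up.
function visibleTasks(project) {
  if (project.mode === 'parallel') {
    return project.tasks.filter((t) => !t.done);
  }
  // sequential: surface just the first incomplete task, if there is one
  const next = project.tasks.find((t) => !t.done);
  return next ? [next] : [];
}

const classwork = {
  mode: 'sequential',
  tasks: [
    { title: 'Do the reading', done: true },
    { title: 'Watch the lectures', done: false },
    { title: 'Do the assignment', done: false },
  ],
};
```

With the reading checked off, `visibleTasks(classwork)` would surface only "Watch the lectures" – exactly the behavior I want from the hypothetical feature.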

So, first things first: sketch it.

I kicked around a couple different ideas, but pretty quickly arrived at the conclusion that it should be integrated into Things’ ‘When’ menu.

The ‘When’ menu in Things. After opening it, you can either click to select an item, or begin typing, and use their natural language parser to choose a date.

The other question was how to display these in the list. The point, of course, was that sequential items wouldn’t show up in the ‘Anytime’ list, but they do still need to be visible in some circumstances – when you click through to the project itself, for instance, future items should still show up.

In the ‘Anytime’ list, that third item wouldn’t appear; you could find it either in ‘Upcoming,’ which is sorted by date, or within the containing project – which is where I took this screenshot.

I actually tried a couple variations – it’s at this point that, were I working for Cultured Code, I’d say “we should build both versions and do some testing to see which is better.” I’m not, though, so I just wireframed them both and turned in the result.

I’m fairly happy with the way I integrated it. Clicking, tapping, or typing “After” pulls up a second menu, where you can search for the item you want to attach to. Instead of thinking about it in terms of the project, the mental model is just “after x, I’ll do y.”
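That "after x, I’ll do y" mental model can be sketched as a simple dependency link – again, a hypothetical illustration of the interaction I wireframed, not Things’ real API (`after`, `done`, and the id map are all names I made up):

```javascript
// Hypothetical model of the "After" attachment: a task with an `after`
// reference stays hidden until the task it points at is complete.
function isAvailable(task, byId) {
  if (task.done) return false;    // finished tasks aren't pending anymore
  if (!task.after) return true;   // no dependency: always available
  return byId[task.after].done;   // "after x, I'll do y": y waits on x
}

const byId = {
  reading:    { id: 'reading',    done: true },
  lectures:   { id: 'lectures',   done: false, after: 'reading' },
  assignment: { id: 'assignment', done: false, after: 'lectures' },
};
```

With the reading done, only the lectures are available; the assignment stays hidden until the lectures are finished, which is exactly the chain the second menu would let you build.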

All told, I really enjoyed this exercise – it was the first wireframing I’ve ever done in Sketch, and it was neat to think about integrating a feature into something I use all the time. (And, hey, Cultured Code, if you’re reading this: feel free to use this idea, because I’d love to have the feature.)


“The Computer for the 21st Century”

I was given this paper to read the other day, and I thought it was fascinating. I didn’t check the date it was published until I was partway through, and found the whole thing still applicable to the modern day.
A few select quotes:

Pads are intended to be “scrap computers” (analogous to scrap paper) that can be grabbed and used anywhere; they have no individualized identity or importance.
Pads, in contrast, use a real desk. Spread many electronic pads around on the desk, just as you spread out papers. Have many tasks in front of you and use the pads as reminders. Go beyond the desk to drawers, shelves, coffee tables. Spread the many parts of the many tasks of the day out in front of you to fit both the task and the reach of your arms and eyes, rather than to fit the limitations of CRT glass-blowing. Someday pads may even be as small and light as actual paper, but meanwhile they can fulfill many more of paper’s functions than can computer screens.

On the death of the user interface:

Prototype tabs, pads and boards are just the beginning of ubiquitous computing. The real power of the concept comes not from any one of these devices; it emerges from the interaction of all of them. The hundreds of processors and displays are not a “user interface” like a mouse and windows, just a pleasant and effective “place” to get things done.

On ubiquitous software:

Today’s operating systems, like DOS and Unix, assume a relatively fixed configuration of hardware and software at their core. This makes sense for both mainframes and personal computers, because hardware or operating system software cannot reasonably be added without shutting down the machine. But in an embodied virtuality, local devices come and go, and depend upon the room and the people in it. New software for new devices may be needed at any time, and you’ll never be able to shut off everything in the room at once.

There are some really interesting ideas in there. We’ve realized one or two – some computer settings travel with you, if you stay within one ecosystem, and we’ve definitely got little screens on the ‘tab’ scale. But still not in the ubiquitous way the paper describes – we’re still tethered to specific devices, rather than genericized hardware with software that adapts to the person using it.
With the advent of cloud computing, though, that seems more possible. I spent a couple days this week running software that simply couldn’t run on my laptop. Artificial intelligence of the flavor I’m researching this summer requires a lot of processing power, and GPUs meet that need quite nicely. My laptop does not have a GPU; it definitely doesn’t have 8 Titans. And yet, there I sat, in a classroom, running 8 Titans along at peak capacity, all from the comfort of my laptop.
We’re getting closer to some of the ideals that Weiser wrote about 25 years ago. It’ll be interesting to see where we go from here.
(Oh, and you should absolutely read the rest of the paper – there are more interesting ideas in there that I didn’t pull out, and a nice narrative-style exploration of some of them at the end.)


Fullscreen with HTML5 and JavaScript

What’s this? Web apps don’t have fullscreen? FALSE!

With a new addition to WebKit and Firefox, you can use fullscreen as much as you want.

How? Click that handy little read more link to find out. (If you’re already in the full post view, GO READ THE REST OF THIS)
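As a taste of what’s ahead: at the time, the Fullscreen API shipped behind vendor prefixes, so WebKit browsers expose `webkitRequestFullscreen` and Firefox exposes `mozRequestFullScreen` (note the capital S) alongside the unprefixed standard `requestFullscreen`. A small helper like this – my own sketch, not any library’s API – picks whichever one the browser actually has:

```javascript
// Find whichever fullscreen method this element supports, preferring the
// standard name over the vendor-prefixed ones. Returns null if none exist.
function getFullscreenMethod(element) {
  const candidates = [
    'requestFullscreen',        // standard
    'webkitRequestFullscreen',  // WebKit / Blink
    'mozRequestFullScreen',     // Firefox (capital S)
  ];
  return candidates.find((name) => typeof element[name] === 'function') || null;
}

// In a real page you'd call it on the element you want fullscreened:
// const method = getFullscreenMethod(document.documentElement);
// if (method) document.documentElement[method]();
```

Note that browsers only honor these calls from inside a user-initiated event handler, like a click – you can’t go fullscreen on page load.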