In the most recent episode of our podcast, @cwebber and @emacsen take a walk down memory lane about 8-bit computers, DOS and Unix command lines, and the way the imagery and mystique of the terminal may be holding us back.

librelounge.org/episodes/episo


@librelounge great episode! I've often thought it would be great to develop interpreters that let users enter commands and get responses in more natural language. The standard CLI was designed for a time when every byte of RAM and hard drive space was a precious resource, and I definitely don't want to lose that sense of economy from computing, quite the opposite. But there's no harm in adding trainer-wheel layers that make using the CLI more like playing a MUD.
@cwebber @emacsen

@strypey @librelounge @cwebber I played around with developing natural language frontends to Unix commands in college.

The challenge is not developing a frontend, but keeping the expressiveness and composability when you do. This may be tied directly to the fact that the interchange format in Unix is just plain text being piped.

If the data structures were more sophisticated, we could elevate the "dialog" to be richer as well.
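The structured-data idea above can be sketched as a toy pipeline whose stages exchange records (dicts) instead of raw text, so each stage knows what fields it is handling rather than re-parsing lines. All names and data here are invented for illustration; this is a sketch of the idea, not a real shell.

```python
# Toy illustration: pipeline stages that pass structured records (dicts)
# instead of plain text, in the spirit of object pipelines.

def list_files():
    # Hypothetical data source; a real version would call os.scandir().
    yield {"name": "dog.jpg", "type": "image", "size": 204800}
    yield {"name": "notes.txt", "type": "text", "size": 1024}

def where(field, value):
    # Filter stage: keep records whose named field matches the value.
    def stage(records):
        return (r for r in records if r.get(field) == value)
    return stage

def select(field):
    # Projection stage: extract one field from each record.
    def stage(records):
        return (r[field] for r in records)
    return stage

def pipe(source, *stages):
    # Chain stages left to right, like | in a shell.
    out = source
    for stage in stages:
        out = stage(out)
    return out

result = list(pipe(list_files(), where("type", "image"), select("name")))
print(result)  # ['dog.jpg']
```

Because each stage sees typed fields rather than text, a "dialog" layer could ask the previous stage what it produces and offer sensible next steps.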

@emacsen @strypey @librelounge @cwebber
I thought it might be nice to add a menu-based helper wrapper around GNU Coreutils. You start it and it prompts you for what you want to do along with offering keyword searches, then walks you through the options. When done, it prints the actual Coreutils command and then invokes it.

But I admit I'm not ambitious enough to write it.
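The menu-wrapper idea could be sketched roughly like this, assuming a hand-written catalogue of tasks; the task names, prompts, and commands below are all invented for illustration, and a real tool would prompt interactively and then invoke the assembled command.

```python
# Sketch of a menu-based helper around coreutils-style commands.
# The catalogue is hypothetical; a real tool would need many entries.

TASKS = {
    "copy a file": {
        "command": "cp",
        "prompts": ["source file", "destination"],
    },
    "count lines in a file": {
        "command": "wc -l",
        "prompts": ["file to count"],
    },
}

def search_tasks(keyword):
    """Keyword search over the task descriptions."""
    return [name for name in TASKS if keyword in name]

def build_command(task_name, answers):
    """Walk through a task's prompts and build the actual command string."""
    task = TASKS[task_name]
    if len(answers) != len(task["prompts"]):
        raise ValueError("expected one answer per prompt")
    return task["command"] + " " + " ".join(answers)

# An interactive front end would call input() for each prompt, print the
# assembled command so the user learns it, then run it via subprocess.
cmd = build_command("copy a file", ["a.txt", "b.txt"])
print(cmd)  # cp a.txt b.txt
```

Printing the real command before running it is the teaching step: the wrapper gradually makes itself unnecessary.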

@emacsen @strypey @librelounge @cwebber I was thinking about that while listening to the episode - such a frontend might also be feasible for voice interaction. If one command knew it was outputting files and the next command knew what files were... maybe voice controlled pipes would work?

@jfred @strypey @librelounge @cwebber The hard part isn't the pipes, it's the data processing bit.

If we look at existing voice systems, they're very closed. It's a calendar subsystem, or a music subsystem, or a Wikipedia lookup, etc.

No one AFAIK (prove me wrong) is doing pipes.

I was briefly involved in a replacement for Yahoo Pipes back in the day, but the tech wasn't there yet.

@emacsen maybe I don't understand the problem. Here's a (possibly) related anecdote. When I was learning Te Reo Māori (the native language of Aotearoa / NZ), I remember struggling to figure out how to chain simple clauses into complex relational sentences (not that I would have described it that way at the time ;) Then I realized that Māori doesn't have words like "with"; it just uses "ki" (at, to, or towards) and "i" (from) to 'pipe' one clause into another. (1/2)
@jfred @librelounge @cwebber

@emacsen couldn't a translation system interpret all of English's many relational terms as variations on "to" and "from", treat both as a pipe, and put the command clauses on either side of the pipe depending on whether it's "to" or "from"? For example, a natural language command like 'list system processes and find gimp' could be translated as 'list processes TO find gimp', which translates to 'ps aux | grep gimp'. (2/2)
@jfred @librelounge @cwebber
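The "TO as a pipe" idea above can be sketched as a small lookup table plus a splitter; the verb table here is invented for illustration, and a usable version would need many more entries and real argument handling.

```python
# Toy translation of "clause TO clause" phrasing into a shell pipeline.
# The phrase-to-command table is hypothetical and deliberately tiny.

VERBS = {
    "list processes": "ps aux",
    "find": "grep",
    "show file": "cat",
}

def translate(sentence):
    """Turn e.g. 'list processes TO find gimp' into 'ps aux | grep gimp'."""
    clauses = [c.strip() for c in sentence.split(" TO ")]
    commands = []
    for clause in clauses:
        for phrase, cmd in VERBS.items():
            if clause.startswith(phrase):
                args = clause[len(phrase):].strip()
                commands.append((cmd + " " + args).strip())
                break
        else:
            raise ValueError("don't know how to say: " + clause)
    return " | ".join(commands)

print(translate("list processes TO find gimp"))  # ps aux | grep gimp
```

Each clause becomes one command, and "TO" becomes the pipe between them; a "FROM" clause would just swap which side of the pipe it lands on.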

@strypey @jfred @librelounge @cwebber Yes, perhaps it could, but for things to be really interesting, the interactivity would look something like:

"Find all the pictures of my dog that I took from my vacation last year".

English isn't so linear.

@emacsen sure, there's a reason folks have turned to machine learning instead of manually coding mappings ;) But I still think a useful translation layer could be built up over time, for example, starting by allowing more natural language words to be translated into "grep" or "cat", and making the terminal spit out more verbose responses whether commands succeed or fail. In fact, I've seen evidence this is already happening, like command suggestions.
@jfred @librelounge @cwebber

@strypey
Such interfaces can work if (and only if) they're interactive. There must be unambiguous feedback.

The problem with ML is that there is no way to test it exhaustively. It can produce wildly wrong results for certain inputs, and you won't know until you present it with that input. Feedback can catch those cases.

Visual feedback might be best because it can be nearly instant.
@cwebber @jfred @emacsen
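The "unambiguous feedback" point above amounts to a preview-and-confirm step: always show the translated command and require approval before anything runs. A minimal sketch, with the prompt function injectable so the logic can be exercised without a terminal:

```python
# Minimal confirm-before-run gate for a natural-language translator.
# ask defaults to input() but can be swapped out for testing.

def confirm(command, ask=input):
    print("Translated to:", command)
    reply = ask("Run this? [y/N] ")
    return reply.strip().lower().startswith("y")

# A front end would only hand the command to the shell when confirm()
# returns True; a surprising translation gets caught at this step.
approved = confirm("ps aux | grep gimp", ask=lambda _: "y")
print(approved)  # True
```

This is where a wrong translation (ML-generated or otherwise) becomes visible instead of silently executing.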

@freakazoid just to clarify, I'm not proposing to use ML, just pointing out that the reason folks have headed that way is that natural language has myriad variations, and trying to think of them all and code them by hand is a nightmare. What I'm proposing is old-fashioned mapping of this to that, like people do when designing natural language text interfaces for MUDs: starting simple and building it up over time.
@emacsen @jfred @cwebber

Mastodon - NZOSS

This Mastodon instance is provided gratis by the NZ Open Source Society for the benefit of everyone interested in their own freedom and sharing with others. Hosting is generously provided by Catalyst Cloud right here in Aotearoa New Zealand.