UI Improvements
Obviously the tools are out in the wild to build interfaces that could rival (or, IMO, beat) anything Apple comes up with. We just need to organize this stuff. This would need hardware that can support dynamic interfaces. I can help here, too. sean@openmoko.com
This page is dedicated to discussing improvements in human-machine interaction.
Human-machine interaction can be separated into several aspects:
Multi-touch
Zooming user interfaces
Graphics
Science fiction
It has been said that the lack of a multi-touch screen leaves less freedom for innovation. Maybe we could still get something out of our touchscreen drivers.
Why? Think of Apple's scroll up/down feature on MacBook touchpads (which are not multi-touch; it's a clever driver hack, iScroll2):
To scroll, just place two fingers on your trackpad instead of one. Both fingers need to be placed next to each other horizontally (not vertically, the trackpad cannot detect that). Some people get better results with their finger spaced a little bit apart, while others prefer having the fingers right next to each other.
iScroll2 provides two scrolling modes: Linear and circular scrolling.
For linear scrolling, move the two fingers up/down or left/right in a straight line, respectively, to scroll in that direction.
Circular scrolling works in a way similar to the iPod's scroll wheel: Move the two fingers in a circle to scroll up or down, depending on whether you move in a clockwise or counterclockwise direction.
Maybe we can port, adapt, or take inspiration from this Macintosh driver.
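As a rough illustration (not taken from iScroll2 itself), here is a minimal Python sketch of how circular scrolling could be derived from touch samples: compute the signed angle swept around the pad centre, clockwise meaning scroll down. The centre coordinates and the mapping are assumptions.

 import math
 
 def circular_scroll_delta(cx, cy, prev_xy, cur_xy):
     """Signed angle (radians) swept around the pad centre (cx, cy).
     In screen coordinates (y pointing down), a positive delta means
     clockwise motion, which we map to scrolling down."""
     a0 = math.atan2(prev_xy[1] - cy, prev_xy[0] - cx)
     a1 = math.atan2(cur_xy[1] - cy, cur_xy[0] - cx)
     d = a1 - a0
     if d > math.pi:          # unwrap across the +/-pi boundary
         d -= 2 * math.pi
     elif d < -math.pi:
         d += 2 * math.pi
     return d
 
 # Example: a finger moving from the right of the centre to below it
 print(circular_scroll_delta(100, 100, (150, 100), (100, 150)))  # ~ +pi/2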
When we want to navigate files, MP3s in an MP3 player, etc., every control the application needs is a button. What about looking at polygons and polyhedra? We could find one for each usage, with as many surrounding subzones as we need controls. Example: you need 5 buttons, so take a pentagon with 5 zones arranged all around it. That way the layout is always optimized...
http://en.wikipedia.org/wiki/Polyhedra http://en.wikipedia.org/wiki/List_of_uniform_polyhedra
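A minimal sketch of the polygon idea, assuming we just want the screen positions of N control zones evenly spaced around a regular N-gon (the function name and screen size are hypothetical):

 import math
 
 def control_zone_centers(n, cx, cy, radius):
     """Centres of n control zones placed on the vertices of a regular
     n-gon around (cx, cy), the first vertex pointing up."""
     return [(cx + radius * math.cos(2 * math.pi * k / n - math.pi / 2),
              cy + radius * math.sin(2 * math.pi * k / n - math.pi / 2))
             for k in range(n)]
 
 # Example: 5 buttons around a pentagon on a 480x640 screen
 for x, y in control_zone_centers(5, 240, 320, 200):
     print(f"zone at ({x:.0f}, {y:.0f})")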
We can't improve the human-machine interface without knowing the strengths and weaknesses of our hardware; some of the weaknesses might turn out to be exploitable features, some strengths limiting constraints.
Question:
What exactly does the touchscreen see when you touch the screen with two fingers at the same time, when you move them, when you move only one of the two, etc.? I'm also interested in how precise the touchscreen is (e.g. refresh rate, possible pressure indication, ...).
Answer:
Conclusions:
Question:
What does one see when sliding two fingers in parallel up(L,R)->down(L,R)?
Answer:
Question:
What does one see when pinching two fingers together while sliding (= the zoom effect on the iPhone)?
Answer:
It would be good to report what performance the current hardware allows:
Please report here your impressions.
If we want to add eye candy and usability to the UI (such as smooth, realistic list scrolling, as seen in Apple's iPhone demo on contact lists, for instance), we'll need a physics engine, so that movements and animations aren't all linear.
The following article explains the term "digital physics" using the iPhone as an example.
The most widely used technique for calculating trajectories and systems of related geometrical objects seems to be Verlet integration; it is an alternative to Euler's integration method that uses a fast approximation.
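To make the idea concrete, here is a minimal position-Verlet sketch for a one-dimensional scroll offset decelerating under friction; the frame rate, initial fling speed and friction value are made-up tuning numbers:

 def verlet_step(x, x_prev, a, dt):
     """Position Verlet: x_next = 2*x - x_prev + a*dt^2.
     Velocity is implicit in the difference between the two positions."""
     return 2 * x - x_prev + a * dt * dt, x
 
 dt = 1.0 / 60                    # one frame at 60 Hz
 x_prev, x = 0.0, 400.0 * dt      # fling the list at ~400 px/s
 while x - x_prev > 0:            # stop once friction has eaten the speed
     x, x_prev = verlet_step(x, x_prev, -800.0, dt)  # -800 px/s^2 friction
     print(f"scroll offset: {x:7.2f} px")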
We may have no need for such a mathematical method at first, but perhaps there are other use cases. For instance, it may be useful for gesture recognition (I'm not aware whether existing gesture-recognition engines measure speed, acceleration, ...).
ODE is an open source, high performance library for simulating rigid body dynamics. It is fully featured, stable, mature and platform independent with an easy to use C/C++ API. It has advanced joint types and integrated collision detection with friction. ODE is useful for simulating vehicles, objects in virtual reality environments and virtual creatures. It is currently used in many computer games, 3D authoring tools and simulation tools.
The akamaru library is the code behind kiba-dock's fun and dynamic behaviour. Its dependencies are light (it needs just GLib). It takes elasticity, friction and gravity into account.
If you want to take a quick look at the code: svn co http://svn.kiba-dock.org/akamaru/ akamaru
The only (AFAIK) application using this library is kiba-dock, a *fun* app launcher, but we may find other uses for it in the future.
As suggested on the mailing list, it is mostly overkill for the uses we intend, but this library may already be optimized, and the API could save us some time, too. Furthermore, as the French saying goes, he who can do more can do less.
There's an ongoing Verlet integration implementation in the e17 project (by rephorm), see http://rephorm.com/news/tag/physics , so we may see some UI physics integration in e17 someday.
http://www.robertpenner.com/easing/
See the demo: it implements non-linear easing behaviour (in ActionScript), but may provide inspiration.
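For comparison, an easing curve in Penner's style is just a shaping function on normalized time; a minimal Python sketch (the curve choice and frame count are arbitrary):

 def ease_out_cubic(t):
     """Fast start, smooth stop; t runs from 0 to 1."""
     u = 1.0 - t
     return 1.0 - u * u * u
 
 # Interpolate a scroll position non-linearly over 30 frames
 start, end, frames = 0.0, 500.0, 30
 for i in range(frames):
     pos = start + (end - start) * ease_out_cubic(i / (frames - 1))
     print(f"{pos:6.1f}")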
If we understand it correctly, when touching the screen in a second place, the cursor oscillates between the two points depending on the relative pressure distribution. Using averaging algorithms, we may have the opportunity to detect peculiar behaviours.
We need raw data (x,y,t) from the real hardware for the following behaviours:
When touching the screen with two fingers at the same time, we do not necessarily see the two points, but we may be able to extrapolate the position of the second one. This solution can add a feature, but will probably be a little erratic...
We may exploit the "half distance" phenomenon on double touch: if a double touch is detected, treat the second touch as being twice as far from the first touch as the reported cursor jump. This would allow finer control, but higher instability.
Double touch detection may be implemented in the driver itself, as well as the stabilization.
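A minimal sketch of the "half distance" correction described above, assuming the first touch position is known and the panel reports the pressure-weighted point roughly midway between the two fingers:

 def extrapolate_second_touch(first, reported):
     """If a second finger pulls the reported point towards the midpoint
     of the two touches, the second finger sits about twice the jump
     beyond the first touch."""
     fx, fy = first
     rx, ry = reported
     return (2 * rx - fx, 2 * ry - fy)
 
 # Example: finger held at (100, 100); the panel suddenly reports (160, 180)
 print(extrapolate_second_touch((100, 100), (160, 180)))  # -> (220, 260)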
The warping can be used in the 4 diagonals, plus the up/down/right/left cross:
----------------  ----------------  ----------------  ----------------
-      1       -  - 1          2 -  -            1 -  - 2            -
-              -  -              -  -              -  -              -
-              -  -              -  -              -  -              -
-      2       -  -              -  - 2            -  -            1 -
----------------  ----------------  ----------------  ----------------
It's not double touch, but two sequential presses with a short time in between (~0.5 s)
One nice idea for virtual input is finger-splash
Yet optimization does not only apply to plain one-letter-at-a-time input. We need some sort of T9 (dictionary-based input help). When typing a word, the first letters determine the next possible ones. Therefore, we may let the less probable following letters fade out (see the sketch after the hints below). Example: type an L; there's no way an X follows...
Hints:
The most critical point is the initial layout of the letters, before any letter is typed. We may also want to use a horizontal two-part keyboard (holding the Neo in both hands, like a PSP...).
The hexinput concept is interesting. What about hiding the less probable letters and enlarging the remaining ones while typing?
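A minimal sketch of the dictionary-driven letter filtering, using a hypothetical tiny word list (a real implementation would load a full dictionary and could use the counts to size or fade keys):

 from collections import defaultdict
 
 WORDS = ["hello", "help", "hold", "world", "work", "word"]  # toy dictionary
 
 def next_letters(prefix):
     """Count which letters can follow the typed prefix in any word;
     letters absent from the result can be hidden or shrunk."""
     counts = defaultdict(int)
     for w in WORDS:
         if w.startswith(prefix) and len(w) > len(prefix):
             counts[w[len(prefix)]] += 1
     return dict(counts)
 
 print(next_letters("wor"))  # {'l': 1, 'k': 1, 'd': 1} -- everything else fades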
There are lots of possible GUI frameworks with various software architectures that could be used for OpenMoko.
The GTA01 hardware uses GTK+/Matchbox without hardware acceleration, and it's not enough: this is the first time a mobile Linux device has such a high-DPI screen. OpenGL ES compositing seems to have a bright future on embedded devices, because compositing finally makes zooming user interfaces a reality.
Considering recent changes in desktop applications, OpenGL has a definite future. For instance, Exposé (be it Apple's or Beryl's) is a very interesting and usable feature. Compositing allows the physics metaphor: the human brain doesn't like gaps/jumps (for instance while scrolling text), it needs continuity, which OpenGL can provide. When you look at Apple's iPhone prototype, it's not just eye candy; it's perhaps the most natural, human way of navigating, because it's sufficiently realistic for the brain to forget the non-physical nature of what's inside.
So OpenGL hardware will be needed in some future hardware revision for 100% fluid operation. Benchmarking will be needed to compare the different alternatives cited below.
Evas is a powerful and power-saving canvas drawing library. It can be OpenGL accelerated. Python/Ruby bindings are available in the "proto" e17 CVS folder.
Clutter, an OpenedHand project, is an open source software library for creating fast, visually rich graphical user interfaces. The most obvious example of potential usage is in media center type applications.
Clutter uses OpenGL (and optionally OpenGL ES) for rendering but with an API which hides the underlying GL complexity from the developer. The Clutter API is intended to be easy to use, efficient and flexible.
It integrates GStreamer (for easy media playback, even camera or microphone input) and supports Pango text rendering and Cairo graphics rendering. Bindings are provided for Python, C# and Ruby.
GTK off-screen rendering is supposed to be on its way; once it is here, it will be possible to use GTK apps directly within OpenGL apps as textures, which would open the possibility of creating a full OpenGL "application manager" (as well as media-consuming apps) with ZUI features.
Features:
An early demonstration of Graff, a lightweight, high-performance graphics rendering library: http://www.mdk.org.pl/articles/2007/04/23/chapter-1-in-which-we-meet-graff
Be sure to check out this demo (scrolling list with inertia scrolling) http://files.mdk.am/demos/graff-demo-3.avi
Of course it will remind you of Apple's iPhone UI. But this one already runs in software mode on the Nokia N770 and N800. The most notable part of Graff seems to be the inertia, and physics integration in general.
Pigment, from Fluendo (the GStreamer guys), is a Python library designed to easily build user interfaces with embedded multimedia. Its design allows it to be used on several platforms, thanks to a plugin system for choosing the underlying graphics API. Pigment is the rendering engine of Elisa, the Fluendo media center project.
Features:
Benchmarking will be needed. We therefore have to define a standard test application that would allow comparing the alternatives.
Some Clutter VS Pigment information: http://www.taimila.com/?q=node/14
Please add here any idea that seems of relevance.
[EDIT] Graff's inertia scrolling list example: http://files.mdk.am/demos/graff-demo-3.avi
Take an item list (e.g. an address book), print it on a ribbon of paper, and glue it onto a wheel (on the tire). You're looking at the front of it, so when you want to go from A to Z, you touch the wheel and drag it up. When you let the wheel go, it keeps going, carried by its inertia. Stop the wheel when you've got your contact. Got the idea? That's why we may speak of an "infinite wheel", so that the surface is flat. For our case here, we always want to display flat, screen-shaped content, so the n-sided uniform prism analogy is mathematically more exact.
Important features:
We can add "parallel wheels", symbolizing different sorting methods. Slide long to the left / right to look at a different wheel = items organization.
Effect: scroll in an inverted/negated fashion (slide down = scroll up, slide up = scroll down)
When finger is released (i.e. touchscreen doesn't detect any press):
if (last_speed_seen > 0) then keep this speed and acceleration, applying friction; else stop scrolling
Scrolling here is treated as one-dimensional, but the same applies to two-dimensional situations (e.g. a zoomed image) too; a sketch follows.
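A minimal sketch of that release rule, with a multiplicative per-frame friction factor (the constants are assumed tuning values, not measured ones):

 FRICTION = 0.95      # fraction of the speed kept each frame (assumed)
 MIN_SPEED = 0.5      # px/frame under which scrolling simply stops
 
 def on_release(last_speed):
     """Kinetic scrolling after the finger leaves the screen: keep the
     last observed speed and let friction bleed it off frame by frame."""
     offset, speed = 0.0, last_speed
     while abs(speed) > MIN_SPEED:
         offset += speed
         speed *= FRICTION
         yield offset          # new scroll position for this frame
 
 for pos in on_release(last_speed=20.0):   # a 20 px/frame fling
     print(f"{pos:.1f}")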
Having a scroll that isn't a 1:1 map to the user's action isn't hard. It's just an extra calculation in the scroll code.
<---- Where is the scroll code? :)
The best would be to add the feature for both finger and stylus scrolling.
TODO:
The same, but for the wheel. It can be very quick to do: you no longer have a 1:1 mapping; for example, 1/4 wheel turn = 1 item. It's geared down, but has inertia.
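A minimal sketch of that gearing, assuming a quarter turn of the wheel moves the selection by one item (the constant is hypothetical):

 ITEMS_PER_TURN = 4   # assumed gearing: 1/4 wheel turn = 1 item
 
 def wheel_to_item(delta_angle_deg, current_item):
     """Map an angular drag on the wheel to a change of selected item."""
     return current_item + round(delta_angle_deg / (360 / ITEMS_PER_TURN))
 
 print(wheel_to_item(90, 10))   # a quarter turn forward -> item 11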
A discussion on the community list identified a desire to have the ability to switch the OpenMoko UI into "left-handed" mode.
The main problem is scrollbars: when they're on the right, dragging the scrollbar left-handed means your hand covers the screen, so you can't see what you are doing. So having the option of scrollbars on the left would be useful.
I don't think the whole screen should be mirrored! There are some elements that should remain, like the main top bar with the status icons and such. Scrollbars are the main thing I can think of right now.
As discussed on community list:
If you hold down one finger and tap another, the cursor pops over and back again. If you keep your second finger touching, the cursor follows it; when you release it, the cursor goes back to the first finger's position. This could be a way to set a bounding box or turn on a mode, so the second finger can do something like rotating around the first, or increasing or decreasing the distance to it.
* slide your right-hand finger down: it scrolls up
* slide your right-hand finger up: it scrolls down
* slide it left: next page/item
* slide it right: previous page/item
* do a circle: rotation
* narrow towards the black circle: zoom -
* move away: zoom +
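A very rough classifier for the straight-line gestures in that list (circles and pinches would need a path-based detector; the threshold is an assumption), as a Python sketch:

 def classify_drag(dx, dy, threshold=20):
     """Classify a one-finger drag by its dominant axis, using the
     inverted mapping listed above (slide down = scroll up)."""
     if abs(dx) < threshold and abs(dy) < threshold:
         return "tap"
     if abs(dy) > abs(dx):
         return "scroll up" if dy > 0 else "scroll down"
     return "next page/item" if dx < 0 else "previous page/item"
 
 print(classify_drag(5, 120))    # slide down -> "scroll up"
 print(classify_drag(-90, 10))   # slide left -> "next page/item"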
The advantages of using simple origin-driven cursor warping as the double-touch detection criterion are that:
We need to emulate key presses. We need to work at a layer where we can get raw cursor coordinates. <---- X server layer?
Doable, but tricky...
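For key-press emulation at the X server layer, the XTEST extension is the usual route; a minimal sketch using python-xlib (assuming python-xlib and XTEST are available on the device):

 from Xlib import X, XK, display
 from Xlib.ext import xtest
 
 d = display.Display()
 keycode = d.keysym_to_keycode(XK.string_to_keysym("a"))
 xtest.fake_input(d, X.KeyPress, keycode)    # synthesize 'a' press
 xtest.fake_input(d, X.KeyRelease, keycode)  # and its release
 d.sync()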