Book People Archive

Re: a few notes on this fine monday morning



lars said:
>    It is hard to get these things intuitive.
>    Most people aren't used to being able to
>    change the size of an image on the web.
>    Maybe it should change automatically
>    with Ctrl + and Ctrl - ? I personally think
>    that the zoom slider from http://maps.google.com/
>    is the best user interface, but it should probably
>    not be embedded so it obscures the page image.

these are excellent points, lars, thank you.

my aim here was to be as dirt-simple as possible
in the underlying code, so everyone can grasp it.
(at least once i clean up that underlying code!)

so fancy things like javascript tricks were out...

(although i would truly love my examples to be
receptive to keypresses, since a good e-book
will always let the user "turn the page" by using
the cursor-keys or the page-up/down keys...)
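the mapping itself is tiny; here's a minimal sketch in typescript, with every name (turnPage and so on) made up purely for illustration -- this isn't from any released code:

```typescript
// hypothetical sketch: map reader keypresses to page turns.
// key names follow the standard KeyboardEvent.key values.
function turnPage(current: number, key: string, lastPage: number): number {
  switch (key) {
    case "ArrowRight":
    case "PageDown":
      return Math.min(current + 1, lastPage); // forward, clamped at the end
    case "ArrowLeft":
    case "PageUp":
      return Math.max(current - 1, 1);        // backward, clamped at page 1
    default:
      return current;                          // ignore every other key
  }
}
```

in a real page you'd wire this to a keydown listener; the point is just that "turn the page on a keypress" is a one-function affair.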

and truthfully, i didn't do any experimentation,
but just glommed onto something that worked.
i assume that i could've done something similar
using frames and/or tables, which would allow
much more intuitive resizing. for an upcoming
text-only 2-up facing-pages display, i use css
for the effect.   because it works and is simple...

the only important thing to me is the interface,
with text-on-one-side and scan-on-the-other.
in a very real sense, i'm wide open as to how it's done.
(and if someone develops some killer method,
please feel free to send me the page template.)

having said all that, though, this is just a demo,
and it's a demo aimed at something to which users
(and you) are accustomed -- the web-browser...

but i still firmly believe that people will _not_
be reading e-books through their browsers.

the e-book experience in a browser is _bad_.
further, in large part it's simply _not_fixable_...

no, people will choose offline viewer-programs
(i've written a few of those applications myself!),
or specialized server-based scripts (those too!)
which run _in_ a browser but control the interface,
so it is a waste of time honing a browser display...

i have an offline equivalent of this whole interface,
named "banana-cream", i will release "eventually"...
it works eleven times better than this online version,
making the work of digitizing a book go _very_ fast...

banana-cream runs "offline" on your own computer,
but it has cyberspace knowledge so it can go fetch
an image itself without working through a browser;
but the fact that it's a regular application means that
it has significantly better capabilities (e.g., in editing)
than you would find in a web-based application...

(banana-cream is also the app with the actual code
that creates the set of html-files _administering_ this
"continuous proofreading" interface in its web-form;
it uses the .zml "master" text-file to generate the files,
in a process that only takes a few seconds.   it's slick...)


>    What if all book page images were put
>    next to each other in one giant image,
>    that you could zoom and pan just like Google Maps?
>    Like a digital microfilm/microfiche of the book.

well, i've got an example like that, somewhere.

the problem with that is, sans ajax, it's clumsy,
since it's very slow to load, even with a fat pipe.

and ajax jacks up code complexity immensely...

again, with an offline solution, where the images
are sitting on the user's hard-disk, it _is_ workable.
but it doesn't add much value to the user-interface.
you wanna read a book, not do a flyover on its pages.

(however, the first time i saw a book like this, i admit
that the idea of it was mind-boggling, for "fun" value.
i happened to be in san jose, and happened to go to
the computer museum there, which happened to have
an exhibit from parc on electronic-books, my thing...
the various takes, one of which was this "flyover" idea,
were tremendously stimulating and invigorating to me!
the day was one of the best days in my life, no kidding.)

there are some "philosophical" concerns at work too...

i firmly believe that we need to spend bandwidth wisely,
and avoid paths which depend too heavily upon its use.
even today, half the u.s. users do _not_ have a fat pipe;
they will simply be left out if we "assume" they have one.
personally, i don't care to contribute to the digital divide.

and i suspect it won't be long before the corporations
cut us off -- via a heavy fee for bandwidth usage that is
designed to ensure they retain "competitive advantage"
against any "upstart" independents -- whereupon we
will have to re-learn how to live with limited bandwidth,
at least those of us who want to "send", not just "receive".
(you can bet corporations will coddle their "consumers".)

my philosophy is that a user should only have to fetch
a page-image _one_time_, and save it _intelligently_,
so the next time it's required, the _local_ copy is used...
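that "fetch once, save intelligently" rule is just a cache-first lookup. a minimal sketch, with all names invented and a map standing in for files on disk:

```typescript
// hypothetical sketch of a fetch-once scan cache; names are illustrative.
type Fetcher = (page: number) => string; // stands in for the actual download

class ScanCache {
  private store = new Map<number, string>(); // stands in for files on disk
  fetches = 0; // counts how often we actually spend bandwidth

  constructor(private fetchScan: Fetcher) {}

  getScan(page: number): string {
    const cached = this.store.get(page);
    if (cached !== undefined) return cached; // local copy: zero bandwidth
    const data = this.fetchScan(page);       // first request: fetch it...
    this.fetches++;
    this.store.set(page, data);              // ...and save it for next time
    return data;
  }
}
```

every later call for the same page comes off the disk, which is both the bandwidth saving and the "lockss"-style replication in one move.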

not only will this minimize the bandwidth required,
it also has the "lockss" effect of replicating the book
in a large number of places, so that it cannot be lost,
suppressed, censored, or removed from existence.

frankly, it's ridiculous to re-fetch a page every time you
want to look at it.   don't get me wrong, it's _convenient_
to have everything on the web, so you can view a copy
if you don't happen to have your computer with you...

and it's somewhat necessary to have an online copy
so that all the search engines will find and index it...

but to the extent that we can, i think we _should_ avoid
repetitive downloading of the same thing over and over.
given my mindset, the browser method is entirely wrong.
and a "flyover" model, where every page gets downloaded
even if you just wanna see one of them, is super-wasteful.

so, what banana-cream does is to download the scan
for a page _if_ it doesn't already have it, then saves it
for future use.   you can have it download a scan when
you call up a page for which you don't have the scan,
_or_ you can tell it to download all the scans in bulk;
it will do this mass-downloading in the background...
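the bulk mode is just "walk the pages, skip what's already local." a sketch under the same assumptions as above (all names invented; in the real app the downloads run in a background thread, which i've left out):

```typescript
// hypothetical sketch of bulk prefetching: grab every scan that
// isn't already on disk, in page order; names are illustrative.
function prefetchAll(
  have: Set<number>,             // pages whose scans are already local
  lastPage: number,
  download: (page: number) => void
): number[] {
  const fetched: number[] = [];
  for (let page = 1; page <= lastPage; page++) {
    if (!have.has(page)) {       // skip anything we already have
      download(page);            // really runs in the background
      have.add(page);
      fetched.push(page);
    }
  }
  return fetched;                // pages that were actually downloaded
}
```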

so you can download a book at night while you sleep
(if you're on dial-up, in which case a complete scanset
might take 4-8 hours), or while you're out having lunch
(if you're on a fat pipe, where it'll take 20-60 minutes)...

once downloaded, that scan-set exists on your disk,
so you get fast load-times and don't use bandwidth...

heck, on a fat pipe, you can even do proofing _during_
the downloading, since each scan downloads in just a
few seconds; thus, you can "thumb through" the book
at a relatively fast pace without having to wait much...

even a relatively simple task -- let's say ensuring that
all the paragraph breaks were recognized in the o.c.r.
-- takes more time per page than the download does.

once i release banana-cream, you'll see this yourself...

hopefully all this makes you see why spending time and
energy honing a browser interface isn't that important...

-bowerbird

p.s.   here's the u.r.l. of my "flyover" example:
>    http://snowy.arsc.alaska.edu/bowerbird/tolbk-old/tolbk.html
please be warned that this is only for people with a fat pipe,
since 70+ largish images will be loaded onto a single page...