J勉オンライン
J-Ben Online

This is an experimental attempt at porting J-Ben to a web app; it is very much a work in progress.

If you're here looking for the standalone J-Ben program:

Updates

2013-12-06

Implemented secondary constraints on the range and set APIs. The REST APIs are now fully capable of searching for characters by Jouyou level, newspaper frequency, JLPT level, or numeric (integer) dictionary index codes.
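To give a flavor of the kind of call involved, here is a hypothetical sketch in Python; the host, endpoint path, and parameter names are invented stand-ins, not the documented API:

    # Hypothetical sketch only: the host, path, and parameter names are
    # invented for illustration. The documented parameters do include a
    # table name and a field name; the bounds and the secondary-constraint
    # syntax shown here are guesses.
    import requests

    resp = requests.get(
        "https://jben.example/api/kanjidic2/range",
        params={
            "table": "grade",  # primary constraint: Jouyou grade
            "field": "value",
            "low": 1,          # integer range: grades 1 through 6
            "high": 6,
            "jlpt": 2,         # hypothetical secondary constraint
        },
    )
    for character in resp.json():
        print(character)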

While the intent is to power the J-Ben web interface, I've documented the APIs so others may experiment.

Granted, my database is not optimized and such queries may be slow. Still, if you like this, contact me and I'll see about publishing the sources for the REST API. (They're in source control, so this is not difficult; I just haven't bothered since I haven't yet seen demand. They would be MIT (Expat) licensed, to boot.)

2013-12-05

Implemented generic search APIs for querying KANJIDIC2 by exact match, set, or integer range. Parameters include the table name and field name, allowing kanji to be queried by Jouyou grade or newspaper frequency ranking. (Note: this is just in the backend API; the front end does not provide this functionality yet.)
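Roughly, the server-side idea looks something like this (a simplified sketch, not my actual code; the table and column names are placeholders):

    # Simplified sketch of a generic (table, field) search supporting
    # exact / set / range operators. Identifiers can't be bound as SQL
    # parameters, so the pair is checked against a whitelist before being
    # interpolated. ALLOWED holds placeholder names, not the real schema.
    ALLOWED = {("grade", "value"), ("freq", "value"), ("jlpt", "value")}

    def build_query(table, field, op, args):
        if (table, field) not in ALLOWED:
            raise ValueError("unknown table/field")
        base = f"SELECT character_id FROM {table} WHERE {field}"
        if op == "exact":
            return f"{base} = %s", [args[0]]
        if op == "set":
            marks = ", ".join(["%s"] * len(args))
            return f"{base} IN ({marks})", list(args)
        if op == "range":
            return f"{base} BETWEEN %s AND %s", [args[0], args[1]]
        raise ValueError("unknown operator")

    # build_query("grade", "value", "range", [1, 6]) ->
    # ("SELECT character_id FROM grade WHERE value BETWEEN %s AND %s", [1, 6])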

To support querying by dictionary codes, I need a way of specifying extra constraints. Hoping to add this soon.

2013-12-04

Removed jQuery Mobile usage. Maybe I'm just an HTML 5 novice, but JQM feels very much like its own mini-language, and I'm not enthralled enough with it to invest deeply.

Unsure about Skeleton. I may continue with it, or I may drop it. It's used on the current mock of the Kanji Study List page. It looks nice, but I'm not sure whether I want to keep it or just roll my own styling. I'm leaning towards just getting my app to *work*, after which I may add eye candy.

Spent some time smoothing my workflow. Time well spent; now I can update much closer to real time. (Was having issues with Emacs/SSH/Windows, and GoDaddy seems to time out its SSH sessions, so I can't reliably code via SSH/Emacs. Instead I wrote some sync scripts to quickly pull/push files; this is working nicely.)
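The sync scripts are nothing fancy; as a minimal sketch of the idea (the remote host and paths are placeholders, not my real setup, and plain rsync aliases would do just as well):

    # Sketch of tiny push/pull commands wrapping rsync over SSH; the
    # remote host and paths below are placeholders.
    import subprocess
    import sys

    REMOTE = "user@example.com:~/public_html/jben/"
    LOCAL = "./site/"

    def sync(src, dst):
        subprocess.run(["rsync", "-avz", src, dst], check=True)

    if __name__ == "__main__":
        if sys.argv[1:] == ["push"]:
            sync(LOCAL, REMOTE)
        elif sys.argv[1:] == ["pull"]:
            sync(REMOTE, LOCAL)
        else:
            print("usage: sync.py push|pull")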

My next development task is the APIs to drive the Kanji Study List. Additionally, I may spend some time documenting how I query the database backend, for anyone who happens to use my backend scripts to create their own JMdict/KANJIDIC2 databases in MySQL.

Reminder to self: now that I'm not dealing with JQM, rework the pages to use XHTML 5 instead of HTML 5. This was my original intent, and I deliberately moved away from XHTML 5 only so JQM would work properly...

2013-09-07

Major improvements to kanji search.

JSON blobs have now been replaced with the first version of rendered kanji results. All data in KANJIDIC2 should now be available via this search; anything missing is considered a bug.

Stroke order diagrams are also shown with each record. I used an open-source kanji stroke order font and ImageMagick to generate the more than 13,000 kanji SODs now available via kanji search. Basically, there should be a SOD for each entry in KANJIDIC2, minus one rather rare character.
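The generation step can be as simple as rasterizing the font one character at a time; here is a sketch of the approach (the font filename, point size, and output naming are assumptions, not my exact pipeline):

    # Sketch: render one PNG per character from a stroke-order font using
    # ImageMagick's "label:" generator. The font filename, point size, and
    # file naming are assumptions for illustration.
    import os
    import subprocess

    FONT = "KanjiStrokeOrders.ttf"  # placeholder stroke-order font file

    def render_sod(char, out_dir="sods"):
        os.makedirs(out_dir, exist_ok=True)
        out = os.path.join(out_dir, f"{ord(char):05x}.png")  # name by codepoint
        subprocess.run(
            ["convert", "-background", "white", "-fill", "black",
             "-font", FONT, "-pointsize", "200", f"label:{char}", out],
            check=True,
        )

    # In practice this loops over every literal in KANJIDIC2:
    for ch in "漢字勉強":
        render_sod(ch)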

The character search page also now supports prepopulating queries via the query parameter. Here is an example.

Pieces remaining:

2013-09-06

Hooked the character search page up to the backend API. Technically, it "works"; you can search for characters and results are returned. However, currently you just get pretty-printed JSON blobs.

I'm working on pre-rendering some kanji stroke-order diagrams from https://sites.google.com/site/nihilistorguk/. I will try to hook these into the "nice" rendering of the dictionary results.

I may not have this up right away; this is a "spare time" project and I don't work on it all the time. It'll be done when it's done. That being said, I hope simple kanji searching will be usable and semi-user-friendly by the end of the month.

2013-08-20

Wrote a script to auto-generate MySQL databases from both JMdict and KANJIDIC2. This is similar to the previous JBLite package I wrote; however, it's not intended as a library to abstract access, nor does it contain any prewritten schema. It merely scans an XML file twice, creating the tables based on the first pass and populating them on the second.

This script was written as a way of "future-proofing" an XML-to-MySQL conversion of Jim Breen's XML database files. I think it's considerably slower than JBLite, but it consumes far less memory, being based on the Expat SAX parser rather than ElementTree. And since database creation is not intended to happen user-side, the extra time is pretty irrelevant.
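The two-pass shape is roughly the following. This is a condensed sketch rather than the actual script: it flattens each record to a single row, ignores repeated and nested children, and uses sqlite3 in place of MySQL so it is self-contained:

    # Pass 1 streams the XML with Expat to discover which child elements
    # occur under each record; pass 2 streams again and inserts one row
    # per record. Repeated values are collapsed ("last value wins").
    import sqlite3
    import xml.parsers.expat

    def scan_fields(path, record_tag):
        """Pass 1: collect names of child elements directly under record_tag."""
        fields, stack = set(), []
        p = xml.parsers.expat.ParserCreate()
        def start(name, attrs):
            if stack and stack[-1] == record_tag:
                fields.add(name)
            stack.append(name)
        p.StartElementHandler = start
        p.EndElementHandler = lambda name: stack.pop()
        with open(path, "rb") as f:
            p.ParseFile(f)
        return sorted(fields)

    def load_records(path, record_tag, fields, conn):
        """Pass 2: stream again, inserting one flattened row per record."""
        sql = "INSERT INTO records ({}) VALUES ({})".format(
            ", ".join(fields), ", ".join("?" * len(fields)))
        row, text = {}, []
        p = xml.parsers.expat.ParserCreate()
        p.StartElementHandler = lambda name, attrs: text.clear()
        p.CharacterDataHandler = text.append
        def end(name):
            if name in fields:
                row[name] = "".join(text)  # last value wins in this sketch
            elif name == record_tag:
                conn.execute(sql, [row.get(f) for f in fields])
                row.clear()
        p.EndElementHandler = end
        with open(path, "rb") as f:
            p.ParseFile(f)

    conn = sqlite3.connect(":memory:")
    tag = "character"  # KANJIDIC2's per-kanji record element
    cols = scan_fields("kanjidic2.xml", tag)
    conn.execute("CREATE TABLE records ({})".format(
        ", ".join(c + " TEXT" for c in cols)))
    load_records("kanjidic2.xml", tag, cols, conn)
    conn.commit()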

I will upload this script to GitHub in the near future.

2013-08-05

Updated the full UI to use jQuery Mobile. It's certainly not perfect, and I've lost some of the original styling. However, this will probably provide a better cross-platform experience.

Steps on the to-do list:

2013-08-03

Starting to use UI toolkits... Looking into jQuery Mobile; it seems its IE support has improved, and it's likely passable as a general-purpose toolkit that can target both phones and desktops.

J-Ben was a desktop app, but since I need to learn a new GUI toolkit anyway, I may as well try one which also works on my Android phone. If I'm bringing J-Ben to the web, I may as well make it widely accessible where feasible.

2013-08-02

Used floats to align some controls to the right. Hoping I can find a better way, but it'll work for now. Added initial mock for kanji drilling.

2013-07-31

Added stubs for word/character search, modeled after the original J-Ben interface. Struggling with getting the layout similar to before; unsure how to do something like GTK+ hboxes, where you can tell one element to expand to fill the remaining space...

2013-07-28

Reworked to use CSS tables (display: table, etc.). Updated vocab study list layout. Added kanji study list page; currently all buttons are no-ops but the styling looks similar to legacy J-Ben.

Last modified: November 26 2016 07:36:28 UTC