Hrorm Forums

Embedded database use case


#1

For projects getting off the ground, the table creation feature (Schema) isn't just a nice addition, it's an absolute necessity. And I think it could use some refinement.

Use case:
I have a school bus I've been converting into a tiny house/motorhome/etc. for a couple of years. It's close to done. I have mostly DC appliances, all controllable via relays set up on an Arduino board. I use momentary push buttons for my light switches, with a total of 9 zones.

An Arduino cannot reliably sense simple button pushes without a pull-up or pull-down resistor, so I've instead opted to use a DIY joystick kit to sense the button presses. HRORM's job in this context is to capture any and all events.

Architecture: I’m trying to integrate HRORM into a CDI deployment (thorntail).

Problem: HRORM starts with the assumption that you already have the necessary tables and columns. This is ill-suited to booting a project from scratch using an embedded-database approach. H2 is actually a fairly decent datastore with all the features you'd expect. The "current year" argument is strong here: there are quite a few database technologies you can hit the ground running with at zero configuration, and it's a shame that the most robust datastores largely lack this.

A flow like:

if schema !exists AND I should create schemas ->
  create schema;
  if table !exists AND I should create tables -> // each table
    create table;
    if column !exists AND I should create columns -> // each column
      alter table add column
      if column type !matches expected type AND I should alter columns ->
        alter table alter column

Where “I should” could be dao flags like:
daoBuilder.createSchema(true).createTables(true).createColumns(true).alterColumns(true).build(connection).

The Dao would need to validate the schema at build time if any of these were true. .build(connection) could also take a Schema object, or a list of Descriptors, to take into consideration for validation.

None of this is to imply it's super-simple or in scope. A baby step in this direction would be to modify Schema.java to conditionally create things based on presence using INFORMATION_SCHEMA queries (in the query returned by schema.sql()). This would avoid an exception when running the operation multiple times. You could also have methods in Dao like boolean tableExists(), boolean schemaExists(), etc. Something to prevent the user from having to go back to the Connection object for answers.
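A minimal sketch of that baby step, assuming the set of existing tables has already been read from an INFORMATION_SCHEMA.TABLES query (in H2, something like SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES). The table names and DDL here are made up for illustration, not hrorm's actual Schema output:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ConditionalSchema {

    /**
     * Given the tables reported by INFORMATION_SCHEMA and the tables the
     * Schema wants, return only the CREATE statements still needed.
     */
    static List<String> statementsNeeded(Set<String> existingTables,
                                         Map<String, String> desiredTables) {
        List<String> needed = new ArrayList<>();
        for (Map.Entry<String, String> e : desiredTables.entrySet()) {
            if (!existingTables.contains(e.getKey().toUpperCase())) {
                needed.add(e.getValue());
            }
        }
        return needed;
    }

    public static void main(String[] args) {
        Map<String, String> desired = new LinkedHashMap<>();
        desired.put("BUTTON_EVENT",
            "CREATE TABLE button_event (id BIGINT PRIMARY KEY, zone INT, pressed_at TIMESTAMP)");
        desired.put("ZONE",
            "CREATE TABLE zone (id BIGINT PRIMARY KEY, name VARCHAR(64))");

        // Second boot: BUTTON_EVENT already exists, so only ZONE needs creating.
        List<String> needed = statementsNeeded(Set.of("BUTTON_EVENT"), desired);
        System.out.println(needed.size()); // prints 1
    }
}
```

Running the guard twice is then harmless: the second boot simply produces an empty list of statements instead of throwing.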

A more advanced solution would be for hrorm to ascertain the status of the datastore itself via validation, which could produce a List of remediation steps: CreateTableRemediation, AlterColumnRemediation, etc. Then for each, hrorm checks to see whether it is both authorized by the user to act on that step and supports automatic remediation for the type. Anything it cannot act upon (UnsupportedOperation or otherwise), it would throw an exception containing all the steps that needed to be done and could not be acted upon.
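A rough sketch of what that remediation list could look like. Every type and method name here is hypothetical, not existing hrorm API, and it uses Java 16+ records for brevity:

```java
import java.util.List;

public class RemediationSketch {

    /** Hypothetical remediation hierarchy; names are illustrative only. */
    interface Remediation {
        String describe();
        boolean supportsAutomaticRemediation();
    }

    record CreateTableRemediation(String table, String ddl) implements Remediation {
        public String describe() { return "Missing table " + table + "; run: " + ddl; }
        public boolean supportsAutomaticRemediation() { return true; }
    }

    record AlterColumnRemediation(String table, String column) implements Remediation {
        public String describe() {
            return "Column " + table + "." + column + " has an unexpected type; migrate manually";
        }
        public boolean supportsAutomaticRemediation() { return false; }
    }

    public static void main(String[] args) {
        // As if a validation pass had produced these two findings.
        List<Remediation> found = List.of(
            new CreateTableRemediation("zone", "CREATE TABLE zone (id BIGINT PRIMARY KEY)"),
            new AlterColumnRemediation("button_event", "zone"));

        // Anything not automatable would be gathered into the exception message.
        long blocked = found.stream()
            .filter(r -> !r.supportsAutomaticRemediation())
            .count();
        System.out.println(blocked); // one step needs the user
    }
}
```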

Alter is particularly tricky to automate… let's say I change
.withStringColumn(…) to .withIntegerColumn(…). An AlterColumnRemediation could suggest three courses of action to the user: use .withConvertingStringColumn(…) instead, migrate the data manually, or switch (back) to .withStringColumn(…). Nothing HRORM can do automatically. And that's fine.

My other challenges so far:
Unfortunately, most of the Java Arduino control suites I tried failed miserably on my Mega2560, so I had to write my own, which caused some headaches. Same problem with joystick input: the Java libs failed me spectacularly. jinput is the only real option, and it cannot detect a joystick disconnect, at least on Linux, and fails to detect new connections.

I use joystick input to direct the Java EE app on which zones (sets of relays with a list of acceptable permutations) to cycle, by firing a CDI event. First, I want to record all events to an H2 database created on the fly using hrorm. This is done so I can handle various types of button presses (long press, short press, double press, etc.) and have a log of all activity.


#2

This sounds like a pretty cool and involved project. I would never have the chutzpah to try to convert a school bus into anything.

Some random comments about the hrorm-y aspects of your post.


#3

On the dynamic creation of tables, the current problem I've run into is that the creations are unconditional. On the first boot of the application, everything works. On the second boot, exceptions are thrown because the code attempts to create the tables again. In a test where everything is thrown away afterward, you're not going to run into this. There's no mechanism in hrorm to guard against this; I had to go back to the Connection object.

I suppose I can check if the H2 database file exists prior to creating tables. That might make it cleaner.
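A sketch of that check, assuming H2's default MVStore engine, which stores a database opened as jdbc:h2:./autonomy in a file named autonomy.mv.db. The path and database name here are hypothetical:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class FirstBootCheck {

    /**
     * H2's MVStore engine appends ".mv.db" to the database path in the
     * JDBC URL, so the file's presence signals a prior boot.
     */
    static boolean databaseFileExists(String basePath) {
        return Files.exists(Path.of(basePath + ".mv.db"));
    }

    public static void main(String[] args) {
        // Only run the Schema DDL when the database file is absent.
        boolean firstBoot = !databaseFileExists("./autonomy");
        System.out.println(firstBoot ? "creating schema" : "skipping schema creation");
    }
}
```

This sidesteps the unconditional-creation problem entirely, at the cost of being H2-specific and of trusting that an existing file means a complete schema.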

Migrations: I have a couple of branches off liquibase; I've written features supporting ADW (like keyless daos did for us, yay!) but I don't even get a response from the maintainers on PRs. The project is likely dead. I thought it would be cool to create some way for the Dao to facilitate basic updates to entities in code: adding or changing fields. It seems like an ALTER TABLE ADD COLUMN + UPDATE [table] SET new_column = someFunction(old_column) pattern could work well here, but there's a lot of cognitive complexity in any solution.
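That two-statement pattern could be generated mechanically once the Dao knows about the new column. A sketch with hypothetical table and column names (note that standard SQL spells it ALTER TABLE … ADD COLUMN):

```java
public class AddColumnMigration {

    /** Hypothetical helper: emit the add-then-backfill statement pair. */
    static String[] addAndBackfill(String table, String newColumn,
                                   String columnType, String backfillExpression) {
        return new String[] {
            "ALTER TABLE " + table + " ADD COLUMN " + newColumn + " " + columnType,
            "UPDATE " + table + " SET " + newColumn + " = " + backfillExpression
        };
    }

    public static void main(String[] args) {
        // e.g. deriving a descriptive name column from an existing int zone id
        String[] steps = addAndBackfill("button_event", "zone_name",
                "VARCHAR(64)", "'zone-' || zone");
        for (String s : steps) {
            System.out.println(s);
        }
    }
}
```

The hard part isn't generating this SQL; it's that someFunction(old_column) generally has to come from the user, which is exactly where the cognitive complexity lives.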

For better or for worse, it seems like more and more storage technologies are winning on the basis of zero configuration. Perhaps hrorm is just good at reading existing data and not meant to be a go-to for prototyping a new app from scratch, or maybe there is a balance to be struck. All I know is, if I need to talk to a SQL database and my current architecture doesn't already have a solution in place, I'm picking either JPA or HRORM: the former if my use case is write-heavy, the latter if it's more read-only.

Hmm. foldingSelect was kind of careless of me. You're right that it's inefficient; the count there is more debugging information for me as I'm developing this. I may have asked why there isn't a .count(where) on the KeylessDao interface; I forget. If I had to guess, it would be to keep the function count down, but in my opinion it just seems simpler and more obvious to have additional functions than a "doFunction" method taking a function argument. Just about anyone typing "dao." into their IDE is going to expect to see something like ".count()" in the suggestion list.


#4

A few pics of the project:

Automation hardware

Drivers area, openings are for the automation wiring I’m still finishing

View backward

Bedroom

Light switches/buttons

Bit of a mess, pics are a little old.


#5

Yeah, the functions are fine. It's been a while since I really dove into the codebase, so I probably need to learn first and see how it's currently done before I go suggesting things =).

I think my idea of checking for the file is enough for my biggest problem. I should just blow old data away anyway in the event of a schema change. Maybe I could version Daos by package or something and manually migrate data on startup if a new schema version is adopted.


#6

Quick note before I leave for work: perhaps the Validator framework could be of some help to you when you are running the application? It does some of what you want. Or at least, it checks for the existence of the table. Gotta run.


#7

A couple of comments on why I went with runFunction() instead of count(), sum(), average() …

  • I thought it was more extensible if new functions get added. Right now there are five, and as you mentioned, that would be five more functions on the Dao.
  • I wanted to retain polymorphism in the return type. You can get a long or a BigDecimal. Again, if we start counting functions, that means you need countLong, countBigDecimal, and all the other variants. It starts to be annoying. Plus, you lose the really nice name “count”, or at least have to choose which return type gets it.
  • I felt it was more orthogonal to how the actual SQL looks and works. You don't run a SQL COUNT statement, you run a SELECT with a COUNT().
  • You still have to specify a column to act on, so it's not a super-clean interface anyway. I guess for a keyed Dao you could add a helper method count(Where), the idea being that you are counting primary keys.

None of that proves anything, and it certainly could be done other ways, but that was how I thought about it.
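To make the trade-off concrete, here is a hedged, in-memory stand-in for the design: one generic entry point keyed by a function enum, rather than five named methods with per-return-type variants. The enum and method signatures are illustrative, not hrorm's actual API, and the "SQL" runs against a plain list instead of a database:

```java
import java.util.List;

public class FunctionDesign {

    /** Illustrative stand-in for a SQL aggregate function enum. */
    enum SqlFunction { COUNT, SUM, MIN, MAX, AVG }

    /** One generic entry point instead of countLong(), sumLong(), etc. */
    static long runLongFunction(SqlFunction fn, List<Long> columnValues) {
        switch (fn) {
            case COUNT: return columnValues.size();
            case SUM:   return columnValues.stream().mapToLong(Long::longValue).sum();
            case MIN:   return columnValues.stream().mapToLong(Long::longValue).min().orElse(0);
            case MAX:   return columnValues.stream().mapToLong(Long::longValue).max().orElse(0);
            default:    throw new UnsupportedOperationException(fn.name());
        }
    }

    public static void main(String[] args) {
        List<Long> ages = List.of(30L, 40L, 50L);
        System.out.println(runLongFunction(SqlFunction.COUNT, ages)); // prints 3
        System.out.println(runLongFunction(SqlFunction.SUM, ages));   // prints 120
    }
}
```

Adding a sixth aggregate means one new enum constant rather than a new Dao method per return type, which is the extensibility argument in the first bullet above.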


#8

In terms of the schema generation, I think the validators almost do a big chunk of what you want.

The problem is, the interface really works like a validator. The return type is void, and then it throws an exception with a bunch of concatenated error messages in it. The idea is that would be captured in logs or on the console when application or tests are run.

It shouldn’t be too hard to give a more programmatic interface though. Something that has an interface like:

boolean tableExists();
boolean sequenceExists();
List<String> missingColumns();
List<String> incorrectlyTypedColumns();

Or something of that nature.

That would be a pretty small amount of code to change in hrorm, and maybe gives you a big percentage of the value you want.

In terms of doing actual migrations, I am much more hesitant to bite that off. It seems like a pretty deep hole to me, but maybe some great ideas are out there.

I’m disappointed to hear about your experience with liquibase. I don’t think it’s the easiest thing to use, but it does do a job. Maybe they are not in such a hurry to support free users right now? I think they have a paid service. But, really, I have no idea. I just used the free version for relatively simple things.


#9

It would be pretty neat to have a flow where one could both identify and selectively create the missing schema bits. That’s kind of a “Cadillac” solution. I just wonder if you’ve thought about how someone would use HRORM iteratively, expanding on their schema as they develop.

Perhaps we can tackle this in isolation instead of taking on the "migration" boss: how would one add a column to an existing entity? Let's just find the roughest edges we hit and smooth them out a little as we're able.

Also, I know you don't like reflection, but have you thought of using interfaces + InvocationHandler for the immutables problem? Your immutable Dao builder could take even fewer arguments this way (it wouldn't need to specify setters). Method references could still be used for the getters, I think.

It kinda forces the implementer into treating these entities differently too, but it wouldn't change your interface contracts all that much. I can prototype something up if you're interested. Seems like a neat solution from the implementer's side, at least, but it does require some "magic" to use those method references to find the fields.


#10

So, not only have I thought about it, I’ve lived it. I am developing an application using hrorm that has gone through a lot of schema changes, which I managed using liquibase! :smiley:

So, I think my proposal of expanding the capabilities of the validator class will actually fit your bill. It will tell you exactly which columns are missing from the underlying schema. From there it’s a short jump to generating the alter table statements you need.

But I must admit, I am quite hesitant to actually generate that SQL within hrorm. I think the problems of migration are really big, and putting a strange little add column feature in just looks like a very funny carbuncle to me. But maybe I could come around to it. But, yeah, I think the Validator upgrade would be step one if we go down this road.

I don’t understand your suggestion for the immutable dao builder. Since I changed that stuff to have abstract base classes, I feel like maintaining all the builder types is much easier. Though, some people (maybe including me) have a little bit of an eyeball bleed when I look at the types. Oh well.


#11

This is when Ruby developers roll their eyes:

public abstract class AbstractDaoBuilder<ENTITY, ENTITYBUILDER, B extends AbstractDaoBuilder<?,?,?>>
    extends AbstractKeylessDaoBuilder<ENTITY, ENTITYBUILDER, B>
    implements DaoDescriptor<ENTITY,ENTITYBUILDER>, SchemaDescriptor<ENTITY, ENTITYBUILDER>

#12

Whoa- I’m not even asking for alter really, just ways to detect the schema matches. I don’t think its reasonable to ask hrorm to try and replace liquibase. Anyone who spelunks into a codebase like liquibases’ is just bound to get cynical… I think its a good tool, I just… wouldn’t write it the same way. :grimacing:

It's less a gripe about BUILDER than that I think this is more "Java-like". With the builder pattern you have to have a second builder class, right? My proposal is to replace both the builder and entity classes with interfaces:

public interface ImmutablePerson {
    String getName();
    int getAge();
    String getEmailAddress();
}

This is something the implementer writes.

HRORM then has a class that implements java.lang.reflect.InvocationHandler. Its job is to manufacture Proxy instances of these interfaces. Your ImmutableDaoBuilder's methods then take only getters (Function<ImmutablePerson, T> method references).

Method references can still be used here by creating a proxy instance, .apply()ing the getter against it, and capturing the java.lang.reflect.Method that gets invoked. For example, you'll then know that the Method named "getName", returning String and taking no arguments, corresponds to the getter passed for the "name" String column.
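A self-contained sketch of that capture trick (all names here are illustrative, not proposed hrorm API): a recording proxy notes which Method the passed-in getter invokes.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.function.Function;

public class GetterCapture {

    /** The entity interface the implementer writes. */
    interface Person {
        String getName();
        int getAge();
    }

    /** Invoke the getter against a recording proxy and capture the Method it hit. */
    static <T, V> Method capture(Class<T> entityType, Function<T, V> getter) {
        final Method[] captured = new Method[1];
        InvocationHandler recorder = (proxy, method, args) -> {
            captured[0] = method;
            // Return a dummy value so primitive-returning getters don't blow up on unboxing.
            Class<?> rt = method.getReturnType();
            if (rt == int.class) return 0;
            if (rt == long.class) return 0L;
            if (rt == boolean.class) return false;
            return null;
        };
        T recordingProxy = entityType.cast(Proxy.newProxyInstance(
            entityType.getClassLoader(), new Class<?>[]{entityType}, recorder));
        getter.apply(recordingProxy);
        return captured[0];
    }

    public static void main(String[] args) {
        Method m = capture(Person.class, Person::getName);
        System.out.println(m.getName() + " -> " + m.getReturnType().getSimpleName());
        // prints: getName -> String
    }
}
```

The builder would run this once per column at build time, so the reflection cost is paid up front, not per row.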

As a side effect, this would eliminate the builder. I think the downsides are clear: it's proxies. Well, I'd argue that they're a part of Java the language, and don't run afoul of your aversion to lazy operations. It's kind of magic-y, though, in a way that makes some cringe.

The upside is, I don't have to have builder classes anymore, let alone the BUILDER generic; I get to use interfaces, which are inherently immutable. I don't think there's any contract HRORM can actually enforce to make ENTITY truly immutable: what prevents one from putting setters on the ENTITY class anyway? The feature was built for the use case, at a cost of some cognitive complexity, but at the end of the day both BUILDER and ENTITY refer to classes of any kind, BUILDER just having some Function<BUILDER, ENTITY> on it.

To feed HRORM entities to save, the implementer would have ways to construct instances like:

public static ImmutablePerson person(final String name, final int age, final String emailAddress) {
    return new ImmutablePerson() {
        @Override
        public String getName() {
            return name;
        }

        @Override
        public int getAge() {
            return age;
        }

        @Override
        public String getEmailAddress() {
            return emailAddress;
        }
    };
}

“ImmutablePerson” could just be “Person” and the implementer could make a concrete “MutablePerson” class (implementing Person). Hrorm can treat these all the same though due to polymorphism.

Pushing back on the Ruby developers (I knew one):

“Yeah, yeah- but at least we’ll still be here in 40 years.”


#13

I feel my explanation is insufficient; let me just show you:

You should be able to try it out by copying those four classes and running:

InterfaceBuilderParadigm.demo();

Output:

INFO: Entity column id refers to field/getter getId which returns class java.lang.Long (expected class java.lang.Long)
Oct 16, 2019 3:32:30 PM io.freedriver.autonomy.hrorm.mock.InterfaceBuilderParadigm showOff
INFO: Entity column name refers to field/getter getName which returns class java.lang.String (expected class java.lang.String)
Oct 16, 2019 3:32:30 PM io.freedriver.autonomy.hrorm.mock.InterfaceBuilderParadigm showOff
INFO: Entity column age refers to field/getter getAge which returns class java.lang.Long (expected class java.lang.Long)
Oct 16, 2019 3:32:30 PM io.freedriver.autonomy.hrorm.mock.InterfaceBuilderParadigm showOff
INFO: Entity column email refers to field/getter getEmailAddress which returns class java.lang.String (expected class java.lang.String)

You then make another InvocationHandler down the line, ResultProxy or something, that returns the appropriate column value for the method invoked.
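A minimal sketch of such a ResultProxy, with hypothetical names throughout: each getter invocation is answered from a map of column values, as if a Dao had already read a ResultSet row into it.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Map;

public class ResultProxySketch {

    interface Person {
        String getName();
        long getAge();
    }

    /** Hypothetical ResultProxy factory: answer each getter from a column-value map. */
    @SuppressWarnings("unchecked")
    static <T> T fromColumns(Class<T> entityType, Map<String, Object> columnValues) {
        InvocationHandler handler = (proxy, method, args) ->
            // getName -> "name", getAge -> "age": strip the "get" prefix.
            columnValues.get(decapitalize(method.getName().substring(3)));
        return (T) Proxy.newProxyInstance(
            entityType.getClassLoader(), new Class<?>[]{entityType}, handler);
    }

    private static String decapitalize(String s) {
        return Character.toLowerCase(s.charAt(0)) + s.substring(1);
    }

    public static void main(String[] args) {
        // As if a ResultSet row had been read into a map by the Dao.
        Person person = fromColumns(Person.class,
            Map.of("name", "ojplg", "age", 41L));
        System.out.println(person.getName() + " " + person.getAge());
        // prints: ojplg 41
    }
}
```

A real version would need to handle boolean "is" prefixes, default methods, and equals/hashCode/toString, but this is the core of it.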

The most compelling advantage is that it makes the Dao building VERY simple. The implementer only needs the interface! No builder pattern classes to generate, no setters.

InterfaceBuilderParadigm<Person> personDaoBuilder = new InterfaceBuilderParadigm<>(Person.class)
            .withIntegerColumn("id", Person::getId)
            .withStringColumn("name", Person::getName)
            .withIntegerColumn("age", Person::getAge)
            .withStringColumn("email", Person::getEmailAddress);

#14

Expanded the example to show what the ResultInvocationHandler could look like. New output:

Oct 16, 2019 3:56:59 PM io.freedriver.autonomy.hrorm.mock.InterfaceBuilderParadigm showOff
INFO: Entity column id refers to field/getter getId which returns class java.lang.Long (expected class java.lang.Long)
Oct 16, 2019 3:56:59 PM io.freedriver.autonomy.hrorm.mock.InterfaceBuilderParadigm showOff
INFO: Entity column name refers to field/getter getName which returns class java.lang.String (expected class java.lang.String)
Oct 16, 2019 3:56:59 PM io.freedriver.autonomy.hrorm.mock.InterfaceBuilderParadigm showOff
INFO: Entity column age refers to field/getter getAge which returns class java.lang.Long (expected class java.lang.Long)
Oct 16, 2019 3:56:59 PM io.freedriver.autonomy.hrorm.mock.InterfaceBuilderParadigm showOff
INFO: Entity column email refers to field/getter getEmailAddress which returns class java.lang.String (expected class java.lang.String)
Oct 16, 2019 3:56:59 PM io.freedriver.autonomy.hrorm.mock.InterfaceBuilderParadigm testGetter
INFO: Fetching column value "id" from Person
Oct 16, 2019 3:56:59 PM io.freedriver.autonomy.hrorm.mock.InterfaceBuilderParadigm testGetter
INFO: Found 9223372036854775807
Oct 16, 2019 3:56:59 PM io.freedriver.autonomy.hrorm.mock.InterfaceBuilderParadigm testGetter
INFO: Fetching column value "name" from Person
Oct 16, 2019 3:56:59 PM io.freedriver.autonomy.hrorm.mock.InterfaceBuilderParadigm testGetter
INFO: Found ojplg
Oct 16, 2019 3:56:59 PM io.freedriver.autonomy.hrorm.mock.InterfaceBuilderParadigm testGetter
INFO: Fetching column value "age" from Person
Oct 16, 2019 3:56:59 PM io.freedriver.autonomy.hrorm.mock.InterfaceBuilderParadigm testGetter
INFO: Found 9223372036854775807
Oct 16, 2019 3:56:59 PM io.freedriver.autonomy.hrorm.mock.InterfaceBuilderParadigm testGetter
INFO: Fetching column value "email" from Person
Oct 16, 2019 3:56:59 PM io.freedriver.autonomy.hrorm.mock.InterfaceBuilderParadigm testGetter
INFO: Found ojplg@hrorm.org

#15

Hey,

Sorry I never got back to you on this. I got really busy and was ignoring hrorm and everything else for a while. And then I plumb forgot about this. I just checked in since I am announcing the new release of hrorm. I will take a look through this thread over Thanksgiving and try to remember what it was all about.


#16

Again, sorry that it took so long to get back to you.

The question of this thread was about schema creation and migration.

I still think the first step is just to make it possible to programmatically inspect the results of a schema validation. That tells you when a particular table or column is missing or of the wrong type. After that, well, I do not have a great story to tell. But I think the validator extension is pretty straightforward. If you think that's a good idea, I would be happy to add it.