Release 0.9.0 has some changes.
- Support for schemas https://hrorm.org/documentation/0.9.0/#schema
- Switch from LocalDateTime to Instant for time stamps https://hrorm.org/documentation/0.9.0/#dates_and_times
The switch to Java 8 Instant is a welcome change.
With LocalDateTime, it’s easy to unintentionally convert your timezone on read or type conversion. The only way for hrorm’s reads to be accurate would be for the user to specify the ZoneOffset when building the DAOs. java.util.Date is also entirely insufficient in this regard. Maybe it would be good to add a withZonedDateTimeColumn method that takes a ZoneOffset, so one can choose to reify their Instants into ZonedDateTimes, which takes the ambiguity out of it. This is already plenty to get users what they need, though.
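To make the ambiguity concrete, here is a small java.time sketch (plain JDK, no hrorm involved): the same Instant yields different LocalDateTime wall times depending on which zone you convert with, while a ZonedDateTime carries its offset along with the value.

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

public class TimestampAmbiguity {
    public static void main(String[] args) {
        Instant instant = Instant.parse("2019-03-01T12:00:00Z");

        // A LocalDateTime has no zone at all, so the same instant renders
        // differently depending on which zone happens to be used on read.
        LocalDateTime utcWallTime = LocalDateTime.ofInstant(instant, ZoneOffset.UTC);
        LocalDateTime newYorkWallTime =
                LocalDateTime.ofInstant(instant, ZoneId.of("America/New_York"));
        System.out.println(utcWallTime);     // 2019-03-01T12:00
        System.out.println(newYorkWallTime); // 2019-03-01T07:00 (same instant!)

        // Attaching an explicit offset removes the ambiguity: the offset
        // travels with the value instead of being implied by the caller.
        ZonedDateTime unambiguous = instant.atZone(ZoneOffset.UTC);
        System.out.println(unambiguous);     // 2019-03-01T12:00Z
    }
}
```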
Generally, it is best practice to store all times as UTC in your databases. I’ve seen nightmarish REST APIs and datastores that mixed timestamp zones; gigantic headaches always follow.
I think Schema is most useful for testing and for small apps that embed H2 or another Java-based database. With NitriteDB, MongoDB, and other document-store databases, one never really has to be concerned with whether the datastore exists before reading from or writing to it, and that’s less a utilitarian concern than an ease-of-use concern. For a tiny app with its own embedded database, this is an awesome feature to have.
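For context, this is the kind of setup that benefits: an in-memory H2 database spun up per test run, where the schema has to exist before any DAO can touch it. The DDL below is a hand-written stand-in for whatever hrorm’s Schema would generate.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class EmbeddedDatabaseSetup {
    public static void main(String[] args) throws Exception {
        // An in-memory H2 database exists only for the life of the JVM,
        // so tables and sequences must be created before first use.
        try (Connection connection = DriverManager.getConnection("jdbc:h2:mem:test");
             Statement statement = connection.createStatement()) {
            // Stand-in DDL; with 0.9.0, hrorm's Schema feature can generate
            // equivalent statements from the DAO descriptions.
            statement.execute("CREATE SEQUENCE person_seq");
            statement.execute("CREATE TABLE person (id BIGINT PRIMARY KEY, name VARCHAR(255))");
            // ... build DAOs against this connection and run the tests ...
        }
    }
}
```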
I think I agree about UTC, but here’s a recent article that says otherwise. It’s very C#/Noda Time specific, but I think the points are general.
It was because of your generative tests that I found the problem. I’m not sure it was ever a hrorm bug; it was just something that’s easy to get wrong. It’s still easy to get wrong (as you point out), but at least hrorm somewhat forces you to look at what you’re doing now.
So, thanks for the tests you contributed; they are really challenging the code.
This was done more for testing than anything. Plus, it was just easy: hrorm already knew everything it needed to make it possible. It’s a pretty small piece of code.
One of hrorm’s design goals is to not impinge upon your Java model in any way. No interfaces to implement or annotations to inject.
I hate the way some ORMs want to own the object model. That’s just all sick and wrong. With hrorm, that object model can live in a completely distinct library that is compiled and distributed on its own. Just because you have code that depends on an object model is no reason your code should be dependent upon a persistence framework. Persistence is an important concern, but one that not all applications need to care about just because they depend on some objects.
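A sketch of what that looks like in practice, using the builder pattern from hrorm’s documentation (method names per the docs, though exact signatures may vary by version): the entity is a plain bean with no hrorm imports at all, and every bit of mapping knowledge lives in the builder.

```java
import org.hrorm.DaoBuilder;

// A plain bean: no interfaces to implement, no annotations to inject.
// It could be compiled and shipped in a library that has never heard of hrorm.
class Person {
    private Long id;
    private String name;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

public class PersonPersistence {
    // All persistence concerns live here, outside the object model.
    public static final DaoBuilder<Person> BUILDER =
            new DaoBuilder<>("person", Person::new)
                    .withPrimaryKey("id", "person_seq", Person::getId, Person::setId)
                    .withStringColumn("name", Person::getName, Person::setName);
}
```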
Timezone/UTC/Instant: the only other consideration I’d make would be epoch milli / epoch nano, which has its own set of challenges, but at least as of Java 8 it’s pretty easy to calculate zoned times from either.
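For example, round-tripping through epoch values with java.time is a one-liner in each direction:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class EpochConversions {
    public static void main(String[] args) {
        long epochMilli = 1_551_441_600_000L; // 2019-03-01T12:00:00Z

        // Epoch millis to a zoned time.
        ZonedDateTime zoned = Instant.ofEpochMilli(epochMilli)
                .atZone(ZoneId.of("America/New_York"));
        System.out.println(zoned); // 2019-03-01T07:00-05:00[America/New_York]

        // And back again; epoch seconds plus nanos covers nano precision.
        Instant instant = zoned.toInstant();
        System.out.println(instant.toEpochMilli());   // 1551441600000
        System.out.println(instant.getEpochSecond()); // 1551441600
        System.out.println(instant.getNano());        // 0
    }
}
```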
Schema: This also seems to highlight a weakness in HRORM I noticed early on. As a developer, I tell hrorm what my object model looks like, but nothing about the datastore, really. HRORM infers what the column is based on the object model I present. This causes an obvious issue when those inferences end up being wrong. In the temporal examples, what if I’m just storing my times as epoch values in a bigint field? Or the bug I filed earlier for booleans (stored as a bit type rather than ‘T’ or ‘F’ as assumed)?
HRORM can create a schema that matches these assumptions using the DaoDescription, but you can also write DaoDescriptions where HRORM makes the wrong assumptions about the columns’ data types. The converting-column methods are effective here, but the real issue is how HRORM assumes a particular way of interacting with JDBC (Instant -> Timestamp, Boolean -> String, or Boolean -> Number, for instance).
You can’t handle every scenario, and some databases are going to diverge from ANSI SQL types. What you can do to mitigate this is design the internals to allow a developer to define the JDBC interaction behavior; then, instead of baked-in assumptions, you’ll have an API with safe defaults. I do need to take a look at 0.9.0’s code a bit to catch up on the recent developments, and port my projects over.
If I understand what you want, I think it’s pretty easy to do. Check out GenericColumn, and the withGenericColumn method in IndirectDaoBuilder. You could also add a string to the GenericColumn where you put in the exact SQL type you want, VARCHAR2(100) or whatever. Does that meet your needs?
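To tie it back to the epoch-as-bigint example above, here is a rough sketch of what such a column could look like. The constructor arguments (a PreparedStatement setter, a ResultSet reader, a JDBC type constant, and a SQL type name for schema generation) are assumed from the description above, so treat the exact signature as approximate.

```java
import java.sql.Types;
import java.time.Instant;
import org.hrorm.GenericColumn;

public class EpochColumn {
    // Store an Instant as epoch millis in a BIGINT column, instead of
    // relying on hrorm's default Instant -> java.sql.Timestamp handling.
    public static final GenericColumn<Instant> EPOCH_MILLIS =
            new GenericColumn<>(
                    (preparedStatement, index, value) ->
                            preparedStatement.setLong(index, value.toEpochMilli()),
                    (resultSet, columnName) ->
                            Instant.ofEpochMilli(resultSet.getLong(columnName)),
                    Types.BIGINT,
                    "BIGINT");
}
```

A builder would then wire it in with something like withGenericColumn("created_at", Foo::getCreatedAt, Foo::setCreatedAt, EPOCH_MILLIS), hypothetical names aside.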
I apologize, I didn’t see your last message with the GenericColumn code!
I’ll take some time tonight to go over your 0.10.0 release code.