TQL

Compile-time ORM, inspired by the Django ORM, written in Rust. Tql is implemented as a procedural macro and even works on the stable version of Rust (see the Using on stable Rust section below).

This library is in alpha stage: it has not been thoroughly tested and its API may change at any time.

Requirements

Currently, tql only supports the PostgreSQL and SQLite databases (more databases will be supported in the future), so you need to install PostgreSQL and/or SQLite (via libsqlite3-sys) in order to use this crate.

Usage

First, add this to your Cargo.toml:

[dependencies]
chrono = "^0.4.0"
tql_macros = "0.1"

[dependencies.tql]
features = ["chrono", "pg"]
version = "0.1"

[dependencies.postgres]
features = ["with-chrono"]
version = "^0.15.1"

(You can remove the chrono stuff if you don’t want to use the date and time types in your model.)
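For reference, a minimal manifest without date and time support might look like this (a sketch: it simply drops the chrono crate and the chrono-related features from the manifest above):

```toml
[dependencies]
tql_macros = "0.1"

[dependencies.tql]
features = ["pg"]
version = "0.1"

[dependencies.postgres]
version = "^0.15.1"
```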

Next, add this to your crate:

#![feature(proc_macro_hygiene)]

extern crate chrono;
extern crate postgres;
extern crate tql;
#[macro_use]
extern crate tql_macros;

use postgres::{Connection, TlsMode};
use tql::PrimaryKey;
use tql_macros::sql;

Then, create your model:

use chrono::DateTime;
use chrono::offset::Utc;

#[derive(SqlTable)]
struct Model {
    id: PrimaryKey,
    text: String,
    date_added: DateTime<Utc>,
    // …
}

Next, create an accessor for your connection:

fn get_connection() -> Connection {
    Connection::connect("postgres://test:test@localhost/database", TlsMode::None).unwrap()
}

Finally, we can use the sql! macro to execute an SQL query:

fn main() {
    let connection = get_connection();

    // We first create the table.
    // (You might not want to execute this query every time.)
    let _ = sql!(Model.create());

    // Insert a row in the table.
    let text = String::new();
    let id = sql!(Model.insert(text = text, date_added = Utc::now())).unwrap();

    // Update a row.
    let result = sql!(Model.get(id).update(text = "new-text"));

    // Delete a row.
    let result = sql!(Model.get(id).delete());

    // Query some rows from the table:
    // get the last 10 rows sorted by date_added descending.
    let items = sql!(Model.sort(-date_added)[..10]);
}

The sql!() macro uses the identifier connection by default.
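If your connection variable is named differently, the macro also accepts the connection as an explicit first argument (the same form required on stable Rust, shown in the Using on stable Rust section below); `db` here is a hypothetical variable name:

```rust
let db = get_connection();

// Pass the connection explicitly instead of relying on
// the default `connection` identifier.
let items = sql!(db, Model.sort(-date_added)[..10]);
```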

Look at the syntax table below for more examples.

Usage with SQLite

First, change the postgres dependency to this one:

rusqlite = "^0.13.0"

Then, change the features of the tql dependency:

[dependencies.tql]
features = ["sqlite"]
version = "0.1"
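Putting the SQLite pieces together, the dependency section might look like this (a sketch combining the fragments above; the chrono parts are omitted for brevity):

```toml
[dependencies]
rusqlite = "^0.13.0"
tql_macros = "0.1"

[dependencies.tql]
features = ["sqlite"]
version = "0.1"
```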

In the Rust code, the connection needs to come from rusqlite now:

use rusqlite::Connection;

fn get_connection() -> Connection {
    Connection::open("database.db").unwrap()
}

And the rest is the same.

Using on stable Rust

If you want to use tql on stable Rust, a few changes are required to make it work:

First, remove these lines:

#![feature(proc_macro_hygiene)]

// …

use tql_macros::sql;

And add the following line before extern crate tql:

#[macro_use]

This is how the start of the file now looks:

extern crate chrono;
extern crate postgres;
#[macro_use]
extern crate tql;
#[macro_use]
extern crate tql_macros;

use postgres::{Connection, TlsMode};
use tql::PrimaryKey;

Finally, disable the unstable feature by updating the tql dependency to:

[dependencies.tql]
default-features = false
features = ["chrono", "pg"]
version = "0.1"

With these small changes, we can use the sql!() macro, but it now requires you to specify the connection:

let date_added = Utc::now();
let id = sql!(connection, Model.insert(text = text, date_added = date_added)).unwrap();

Also, because of limitations of the stable compiler, you can no longer use arbitrary expressions as arguments: that is why we now create the variable date_added first. For now, if you use tql on stable, arguments must be identifiers or literals.

Why not always use the stable version?

On stable Rust, procedural macros cannot currently emit errors at specific positions, so the errors you get are less useful, as in the following output:

error[E0308]: mismatched types
  --> src/main.rs:47:18
   |
47 |     let result = sql!(Model.insert(text = text, date_added = Utc::now(), done = false));
   |                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected &str, found struct `std::string::String`
   |
   = note: expected type `&str`
              found type `std::string::String`
   = help: try with `&sql!(Model.insert(text = text, date_added = Utc::now(), done = false))`
   = note: this error originates in a macro outside of the current crate

While you will get this nicer error when using the nightly version of Rust:

error[E0308]: mismatched types
  --> examples/todo.rs:49:46
   |
49 |     let result = sql!(Model.insert(text = text, date_added = Utc::now(), done = false));
   |                                           ^^^^
   |                                           |
   |                                           expected &str, found struct `std::string::String`
   |                                           help: consider borrowing here: `&text`
   |
   = note: expected type `&str`
              found type `std::string::String`

So, a good workflow is to develop on nightly and ship on stable. This way, you get the best of both worlds: nice errors during development, and deployment with the stable version of the compiler. This works out fine because, by the time you are ready to deploy, there should be no compiler errors left (and you can still see them on nightly anyway).

Note
Compile with RUSTFLAGS="--cfg procmacro2_semver_exempt" to get even better error messages.

Syntax table

Each entry pairs the generated SQL with the tql syntax you can use to produce it.

SQL:  SELECT * FROM Table
Rust: Table.all()

SQL:  SELECT * FROM Table WHERE field1 = 'value1'
Rust: Table.filter(field1 == "value1")

SQL:  SELECT * FROM Table WHERE primary_key = 42
Rust: Table.get(42)
      Table.filter(primary_key == 42)[0..1];

SQL:  SELECT * FROM Table WHERE field1 = 'value1'
Rust: Table.get(field1 == "value1")
      Table.filter(field1 == "value1")[0..1];

SQL:  SELECT * FROM Table WHERE field1 = 'value1' AND field2 < 100
Rust: Table.filter(field1 == "value1" && field2 < 100)

SQL:  SELECT * FROM Table WHERE field1 = 'value1' OR field2 < 100
Rust: Table.filter(field1 == "value1" || field2 < 100)

SQL:  SELECT * FROM Table ORDER BY field1
Rust: Table.sort(field1)

SQL:  SELECT * FROM Table ORDER BY field1 DESC
Rust: Table.sort(-field1)

SQL:  SELECT * FROM Table LIMIT 0, 20
Rust: Table[0..20]

SQL:  SELECT * FROM Table
      WHERE field1 = 'value1'
        AND field2 < 100
      ORDER BY field2 DESC
      LIMIT 10, 20
Rust: Table.filter(field1 == "value1" && field2 < 100)
          .sort(-field2)[10..20]

SQL:  INSERT INTO Table(field1, field2) VALUES('value1', 55)
Rust: Table.insert(field1 = "value1", field2 = 55)

SQL:  UPDATE Table SET field1 = 'value1', field2 = 55 WHERE id = 1
Rust: Table.get(1).update(field1 = "value1", field2 = 55);
      Table.filter(id == 1).update(field1 = "value1", field2 = 55);

SQL:  DELETE FROM Table WHERE id = 1
Rust: Table.get(1).delete();
      Table.filter(id == 1).delete()

SQL:  SELECT AVG(field2) FROM Table
Rust: Table.aggregate(avg(field2))

SQL:  SELECT AVG(field1) FROM Table1 GROUP BY field2
Rust: Table1.values(field2).annotate(avg(field1))

SQL:  SELECT AVG(field1) AS average FROM Table1
      GROUP BY field2
      HAVING average > 5
Rust: Table1.values(field2).annotate(average = avg(field1))
          .filter(average > 5)

SQL:  SELECT AVG(field1) AS average FROM Table1
      WHERE field1 < 10
      GROUP BY field2
      HAVING average > 5
Rust: Table1.filter(field1 < 10).values(field2)
          .annotate(average = avg(field1)).filter(average > 5)

SQL:  SELECT Table1.field1, Table2.field1 FROM Table1
      INNER JOIN Table2 ON Table1.pk = Table2.fk
Rust: #[derive(SqlTable)]
      struct Table1 {
          pk: PrimaryKey,
          field1: i32,
      }

      #[derive(SqlTable)]
      struct Table2 {
          field1: i32,
          fk: ForeignKey<Table1>,
      }

      Table1.all().join(Table2)

SQL:  SELECT * FROM Table1 WHERE YEAR(date) = 2015
Rust: Table1.filter(date.year() == 2015)

SQL:  SELECT * FROM Table1 WHERE INSTR(field1, 'string') > 0
Rust: Table1.filter(field1.contains("string"))

SQL:  SELECT * FROM Table1 WHERE field1 LIKE 'string%'
Rust: Table1.filter(field1.starts_with("string"))

SQL:  SELECT * FROM Table1 WHERE field1 LIKE '%string'
Rust: Table1.filter(field1.ends_with("string"))

SQL:  SELECT * FROM Table1 WHERE field1 IS NULL
Rust: Table1.filter(field1.is_none())

SQL:  SELECT * FROM Table1 WHERE field1 REGEXP BINARY '^[a-d]'
Rust: Table1.filter(field1.regex(r"^[a-d]"))

SQL:  SELECT * FROM Table1 WHERE field1 REGEXP '^[a-d]'
Rust: Table1.filter(field1.iregex(r"^[a-d]"))

SQL:  CREATE TABLE IF NOT EXISTS Table1 (
          pk INTEGER NOT NULL AUTO_INCREMENT,
          field1 INTEGER,
          PRIMARY KEY (pk)
      )
Rust: #[derive(SqlTable)]
      struct Table1 {
          pk: PrimaryKey,
          field1: i32,
      }

      Table1.create()
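In real code, each expression in the Rust column is wrapped in the sql!() macro described in the Usage section. For example, a sketch (assuming a Table model with field1 and field2 columns and a connection variable in scope):

```rust
// Hypothetical: fetch rows 10..20 of the matching, sorted results.
let rows = sql!(Table.filter(field1 == "value1" && field2 < 100)
    .sort(-field2)[10..20]);
```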

Donations

If you appreciate this project and want new features to be implemented, please support me on Patreon.
