// what I'm still missing
        select * ;
                from cur_articoli ;
                where not exists ;
                        (select top 1 * from articolicaratteristiche where articolicaratteristiche.cod_art = cur_articoli.cod_art ;
                                and articolicaratteristiche.caratteristica = cur_articoli.caratteristica) ;
                into cursor cur_dainserire readwrite
        select cur_dainserire
? "1"
// performance test
        select * from cur_articoli_x into cursor cur_articoli readwrite
        select * from cur_articoli_x where .f. into cursor cur_dainserire readwrite
        select cur_articoli
        scan
                select articolicaratteristiche
                seek str_pad(cur_articoli.cod_art, 16, " ") + cur_articoli.caratteristica
                if !found()
                        select cur_articoli
                        scatter memvar
                        select cur_dainserire
                        append blank
                        gather memvar
                endif
                select cur_articoli
        endscan
        select cur_dainserire

In this code, the first SELECT with NOT EXISTS is, in practice, equivalent to the second loop.

I need to select all the "cur_articoli" records that are not contained in "articolicaratteristiche".
The EXPLAIN output for the first query:

Create temporary index on 'COD_ART+CUR_ARTICOLI.CARATTERISTICA' for join
Table 'articolicaratteristiche' has 5283 records
Table '_00000e380038' has 1905 records
Processing table 'articolicaratteristiche' sequentially
WHERE condition for table '_00000e380038' could not be optimized
Total I/O read operations was 5245119
No records selected

So, the first SELECT takes 46.053 seconds.
The loop takes only 2.077…
Is it possible to speed up the first query?
This is a really convoluted query.
You are selecting from a cursor and then checking for the existence of matching records inside a nested EXISTS subquery!
Also, your timing is taken after the browse, so it does not reflect just the query; it measures the full display operation as well.
If certain operations can be sped up with a mixture of SQL and NoSQL, then you have a workaround.
NoSQL will always be faster than SQL, as it uses indexes and works directly with the data, not with a temporary subset of the data, which SQL does because it is based on sets of data.
It looks like it cannot be optimized, as you are performing a nested EXISTS subquery for each record.
This is performing a lot of I/O.
Yes, you are performing a join for every record inside the EXISTS subquery.
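If Lianja handles outer joins better here, one workaround worth trying is rewriting the NOT EXISTS as a LEFT OUTER JOIN with an IS NULL test, so the engine can do a single optimized join instead of a per-record subquery. This is only a sketch using the column names from the original query; verify the plan with SET EXPLAIN ON before relying on it:

        // anti-join sketch: keep cur_articoli rows with no match in
        // articolicaratteristiche (ac is just an alias)
        select cur_articoli.* ;
                from cur_articoli ;
                left outer join articolicaratteristiche ac ;
                        on ac.cod_art = cur_articoli.cod_art ;
                        and ac.caratteristica = cur_articoli.caratteristica ;
                where ac.cod_art is null ;
                into cursor cur_dainserire readwrite

If the optimizer picks up the index on cod_art + caratteristica for the join, this should avoid the 5-million-plus read operations shown in the EXPLAIN output.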

When I start the app using a realistic dataset (about half the size of the client's dataset), the app takes about 3 or 4 minutes to load.
Lianja tables, imported from VFP. 25 pages.
It sounds like you are reopening tables over and over and scanning through them at load time. There are attributes to prevent this from happening.
Some suggestions:

1) Use VTs (virtual tables), not tables, with the WHERE clause set to 1=0 until the page is viewed. That way you will be loading data for one page, not 25.

2) Never pull an entire table. I don't know if that's what you are doing, but with SQL backends that's the way it is done. This is even more of an issue in browser/mobile apps, but also in Lianja apps if you haven't played with LSQL Caching. VFP uses extensive caching, which makes this invisible in many cases (it turns out that this is the secret to VFP's data speed, not Rushmore, as determined by actual testing done by Christof).

3) Ensure you have all the indexes needed for the queries you are using. One table scan on a large table is enough to wreck loading times, even when returning an appropriate dataset.
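As a sketch of suggestion 1 (the table, alias and connection names here are hypothetical, and the exact CREATE VIRTUALTABLE syntax depends on your backend; check the Lianja virtual table documentation):

        // a virtual table that returns no rows until it is requeried
        create virtualtable vt_orders connstr "local" as select * from orders where 1=0
        // when the page is activated, requery the section with the real
        // filter, e.g. (hypothetical names):
        // Lianja.get("page1.section1").requery("custid = '" + m_custid + "'")

The point is that the initial open costs next to nothing, and real data is only fetched for the page the user actually looks at.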

The navigation bar slider that is used to navigate between records in an App requires knowing the total number of records in the table, based on the filter, and the number of deleted records. This requires a scan through the table, which is fine with smallish tables that are not on a network drive.

A good performance gain can be achieved with large tables and/or network drives by hiding the navigation bar slider.

In Lianja v2.1 there is a new page attribute called “Deferred load“. If this is checked then data is not loaded into the associated page until the page is activated and becomes the current page. So in other words, the loading of the page with data and the refresh of this data does not occur for hidden pages until they are first activated.

By default "Deferred load" is off. Bear in mind that if you check this on, the data in the page will not be accessible until the page has been activated at least once. In many cases this is acceptable.

This speeds up App load time quite significantly when an App consists of a large number of pages.

In Lianja 2.1, the App Inspector "Events" tab now displays an elapsed time for each operation. This can help you pinpoint slow parts of your apps with a view to optimizing their performance.

If you find any odd behavior with App loading, let me know, and as a workaround use --nodeferredload as a command line switch, since deferred page loading is on by default.
You can use the App Inspector when loading an App to see what's going on and where the performance bottleneck, if any, is in your Apps.

I have now enabled SMARTQUERY caching for network drives in Lianja v2.1.
This is my test on a database and the performance gains I am now seeing by tuning the database engine with the commands detailed below.

clear smartquery // this clears the existing SMARTQUERY cache 
set smartquery on
set transaction isolation level repeatable read
set share on
set networkshare on
set icache to 1024
set dcache to 8192*8
open database bdficc
set explain on
SELECT * FROM cliente,historia into cursor testa nomemo WHERE HISTORIA.CL=CLIENTE.CCL

Explaining execution plan for command:
select * FROM cliente,historia into cursor testa nomemo WHERE HISTORIA.CL=CLIENTE.CCL
No smartquery cache available
Optimized JOIN to table 'historia' using index tag 'INDH2'
SET DCACHE ON (Table page cache)
SET DCACHEBLOCKSIZE TO 8192 (Size of table cache pages)
SET DCACHESIZE TO 65536 (Number of table pages to cache)
SET ICACHE TO 1024 (Index node cache)
SET SQLCACHESIZE TO 65536 (Size of SQL Query cache extents)
SET SQLBUFFERSIZE TO 1024 (Number of SQL write cache buffers)
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ requires shared(read) lock on tables and indexes
Performing join
Parent table 'cliente' has 6469 records, table size is 7.00MB, DCACHE is ON, DCACHEBLOCKSIZE 8912, record size 1114, records per page 8
Child table 'historia' has 45619 records, table size is 63.00MB, DCACHE is ON, DCACHEBLOCKSIZE 8912, record size 1453, records per page 6
Processing parent table 'cliente' sequentially
Total I/O read operations was 51030
Total I/O read size was 71.4874MB
Total I/O write size was 0.0000MB
Total SEEK operations performed was 6470
Total SEEK operations failed was 1593
Total locks performed 0
Total rlocks performed was 7
Total unlocks performed was 3 (4 active)
Total I/O read cache hits was 55328
Total I/O read cache misses was 7818
Total I/O index cache hits was 21561
Total I/O index cache misses was 586
Join completed in 7.176 seconds
Transaction isolation level requires shared(read) unlock of tables and indexes
Save as c:\users\barrym~1\appdata\local\temp\_00001ab00008.dbf
Inserting 34227 rows of size 2526 without memos using buffersize 161664, records per page 64
Rows inserted in 6ms
Total I/O read operations was 51037
Total I/O read size was 71.4971MB
Total I/O write size was 82.4523MB
Total SEEK operations performed was 6472
Total SEEK operations failed was 1595
Total locks performed 7
Total rlocks performed was 8
Total unlocks performed was 15 (0 active)
Total I/O read cache hits was 55329
Total I/O read cache misses was 7819
Total I/O index cache hits was 21561
Total I/O index cache misses was 587
34227 records selected in 7.737 seconds

Now I execute the same query which is now SMARTQUERY cached.

SELECT * FROM cliente,historia into cursor testa nomemo WHERE HISTORIA.CL=CLIENTE.CCL

Explaining execution plan for command:
select * FROM cliente,historia into cursor testa nomemo WHERE HISTORIA.CL=CLIENTE.CCL
Using smartquery cache file C:\Lianja\data\system\_sq00001ab01.dbf
Total I/O read operations was 8
Total I/O read size was 0.0106MB
Total I/O write size was 0.0000MB
Total SEEK operations performed was 7
Total SEEK operations failed was 0
Total locks performed 0
Total rlocks performed was 8
Total unlocks performed was 8 (0 active)
Total I/O read cache hits was 15
Total I/O read cache misses was 6
Total I/O index cache hits was 11
Total I/O index cache misses was 2
34227 records selected in <1ms

Notice that SMARTQUERY has detected that the query was previously executed and is now cached so it just uses that resultset. If there are any changes to the data in the FROM tables of the SQL SELECT then the SMARTQUERY cache for that query is thrown away and re-executed. You can specify the “lifetime” of the resultset using SET SMARTQUERY TO nSeconds, in which case if any of the tables have changed within nSeconds of the last query execution then the resultset cache will be thrown away and the query re-executed.

At any time you can throw away all of the SMARTQUERY cache by executing CLEAR SMARTQUERY.

In many applications real-time queries for BI and reporting are not required and you can work with a snapshot of data. You can either SET SMARTQUERY TO a high number e.g. 60*60*24 which is 24 hours or alternatively COPY DATABASE TO and report from the snapshot of the data.
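For example, to treat any cached resultset younger than a day as fresh:

        set smartquery on
        set smartquery to 60*60*24   && keep cached resultsets for up to 24 hours

With this in place, repeated BI queries over the same tables are served from the cache unless the underlying tables changed within the last 24 hours.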

If you want to perform a query and ignore the SMARTQUERY cache specify the NOSMARTQUERY keyword on the SQL SELECT.

SELECT * FROM cliente,historia into cursor testa nomemo nosmartquery WHERE HISTORIA.CL=CLIENTE.CCL

Or alternatively, you can force SMARTQUERY for a particular SQL SELECT like this:

SELECT * FROM cliente,historia into cursor testa nomemo smartquery WHERE HISTORIA.CL=CLIENTE.CCL

Working with huge amounts of data for BI and reporting is speeded up dramatically using SMARTQUERY.

Note that the SMARTQUERY cache is shared across all users on a system, so if one user performs a query and it becomes cached, other users use the cached resultset as long as no changes have been made to the tables in the query.
On a heavily loaded WORM (Write Once Read Many) database, such as one used as a data warehouse, this results in massive performance gains.

The smartquery cache is persistent across server reboots.

For anyone interested the SMARTQUERY resultset cache for each SQL SELECT can be found in:


These are kept in the server system database directory and shared across all desktop LAN users, Lianja SQL Server connections, Web and Mobile users.

For optimum application performance, if you have Microsoft Security Essentials or other malware protection, be sure to exclude lianja.exe and lianjaruntime.exe from real-time protection.

I have recently been doing some speed comparisons between Lianja and SQL Server.

For an apples-to-apples comparison, you need to return your SQL Server results in a data-bound grid, since the Lianja BROWSE command comes up in a data-bound grid.

In SQL Server, select your table, right-click it and select "Edit Top 200 Rows".
It will then return the top two hundred rows. Right-click anywhere in the result set > Pane > SQL.
Modify the SQL to remove the TOP(200), then re-run the query into the data-bound grid.

In my case, I have a wide table with 100,000 rows. In SQL Server, on a box with 40 cores and 512 GB of memory, the result returns in 11 seconds. In Lianja, it returns in 1 second via the browse command.

Both sets of data had the same indexes created.

This is opening up some eyes at my firm.

If speed is important to your web app, you really need to have a go at using OData updates.
The following article on the main website covers the LianjaCloudDataServices.js library (included in Lianja v2.0), which you can use to perform OData CRUD operations on any server-side database from any client-side framework: …-in-lianja-v20

I just wanted to share the HUGE app load speed improvements we have experienced with the v2.1 beta.
We run a large app via RDP on big production servers.
Historically, one of our frustrations has been that big apps take a while to load in Lianja; well, not any more. Many improvements, but especially the deferred load feature, make a real difference.
Below are the times from entering the password in the App Center to the first page of the app appearing.

Note the 14 secs time is the same whether running with the --rdp switch or not.




The main benefit of SET SMARTQUERY ON is that it caches query results persistently (even across reboots) and only throws the smartquery cache away if any of the tables involved in a query have been modified since the query was cached. For huge tables with complex joins this results in massive performance gains.

You typically use SMARTQUERY on WORM tables, i.e. Write Once Read Many. If you think about it, why even bother to perform query optimization if you already have a resultset for the exact same query and the tables in the query have not been modified in any way? This is the basis of intelligent queries, and that's what SMARTQUERY technology provides for you.

In this latest beta, I am now seeing SQL selects on tables containing 5,000,000 rows in <1ms.

Some of my WebApps run OK in Web Preview but very slowly (or not at all) in the browser (Firefox, localhost).
It seems to depend on the complexity of the App (number of sections, JavaScripts)…
Are there hardware specifications for the Cloud Server (e.g. processor(s), speed, memory)?
I have to find out what's wrong with my little (UTF-8) app.
I don't think UTF-8 has any effect on the performance. I use it exclusively for all the web apps I write.
You must be doing something that is causing multiple refreshes of a page.

If you install Fiddler (just google it), you can use it to watch the requests and the results sent back from the server.
I would recommend using Google Chrome or IE. I'm not a Firefox fan.

Is there any performance advantage if I import this table into my Lianja DB, or will the SQL execution performance be the same?
Yes, the native format is faster, particularly in multi-user scenarios, and the SQL query optimizer will perform better if you have indexes on the columns you are querying.

If that's a 1Gb/s LAN then your file copy is only 88Mb/s, which is pretty slow.

There could be many reasons for this so before you look at your application you need to sort this out first.

Make sure that you do not have conflicting IP addresses on the LAN.

Google “windows test lan speed” and make sure the connections are ok.

Rule #1: check the Ethernet cables and replace them to make sure.
Rule #2: check your Ethernet switch.
Rule #3: run Performance Monitor on the server to see if something is eating CPU, disk or network.
Rule #4: make sure virus checkers aren't killing performance by temporarily disabling them.


If you have 1 million records and you have created indexes on the columns that you want to query on, and let's say that only 100 records match the query, then Lianja will only read 100 records. Once.
You can use the EXPLAIN command in the console to verify that you have correctly created indexes that will be used by the sql query optimizer.
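For example (table and column names are illustrative), turn on explain output in the console before running the query and check that an index tag is picked up:

        set explain on
        select * from orders where custid = "C1001" into cursor tmp
        // the plan should report something along the lines of
        // "Optimized WHERE condition using index tag 'CUSTID'";
        // a "could not be optimized" line means a matching index is missing

Compare the reported total I/O read operations with and without the index to confirm the optimizer is using it.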

Now with some tuning.

set icache to 5000
set dcache to optimum
set dcache on
// now running the same code as stated above

Results are:

Appended 180,000 records in 2.563 secs
Replaced 180,000 records in 2.320 secs
Indexed 180,000 records in 4.580 secs
Replaced 180,000 indexed records in 1.999 secs

So, if it's data crunching you want to do, you need to play with the DCACHE and ICACHE settings, which affect the internal cache sizes for indexes and database tables.


create table test (name char(20), product char(30), value numeric(10,2))
use test
append blank 180000
? "Appended 180,000 records in " + timeofday(4) + " secs"
replace all name with upper(name)
? "Replaced 180,000 records in " + timeofday(4) + " secs"
index on upper(name) tag name
? "Indexed 180,000 records in " + timeofday(4) + " secs"
replace all name with upper(name)
? "Replaced 180,000 indexed records in " + timeofday(4) + " secs"
Appended 180,000 records in 2.601 secs
Replaced 180,000 records in 2.087 secs
Indexed 180,000 records in 6.273 secs
Replaced 180,000 indexed records in 2.013 secs

It’s worth noting that Lianja creates full balanced B+ tree indexes with no duplicates. These are optimized for multi-user network access.
Note the use of the built-in timeofday( ) function which you can place in your code to find “hotspots” that may be causing issues.
timeofday( 0 ) resets the timer.
timeofday( 4 ) returns the time since the last time it was called with msecs granularity.
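For example, to time a single suspected hotspot:

        timeofday(0)                 && reset the timer
        replace all name with upper(name)
        ? "Replace took " + timeofday(4) + " secs"

Sprinkling pairs like this through a slow routine quickly narrows down which statement is eating the time.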

Lianja uses cursor record buffering, so when you move off record A it will automatically be written.
So the answer is: no, you can't edit 2 records from the same table at the same time without having committed the first one.

In fact, the SQL query optimizer uses the indexes in the same way as Rushmore. It's nothing special in that regard.
By using the FOR clause you are telling Lianja to ignore the index and "touch" every record. To get the behavior you want, try using SEEK and COUNT WHILE.

Using SET TIMELINE OFF will speed this up quite a bit too.
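A sketch of the SEEK + COUNT WHILE pattern (tag and variable names are hypothetical). Because the active index orders the table, all matching records are contiguous, so the count stops as soon as the key changes instead of touching every record the way COUNT FOR does:

        select orders
        set order to tag custid      && index on custid
        seek m_custid                && position on the first match
        count while custid = m_custid to n_total
        ? n_total                    && number of records for this customer

If SEEK fails (no match at all), FOUND() is .F. and the count is simply 0.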

Which method provides better performance: CASE statements or IF statements?
The compiler will optimize either way; there is no difference in performance.
If your code is merely trying to assign a value based on a condition, you should also look at IIF() and ICASE() in the doc wiki. These are faster than using compiled statements for simple values.
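For example (variable names are illustrative), these one-liners replace a multi-line IF or DO CASE block:

        status = iif(quantity > 0, "in stock", "out of stock")
        grade  = icase(score >= 90, "A", score >= 80, "B", score >= 70, "C", "F")

IIF() takes a condition plus two result expressions; ICASE() takes condition/result pairs and a final default, evaluated left to right.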