? TIMEOFDAY(3)
// what I'm missing
select * ;
    from cur_articoli ;
    where not exists ;
    (select top 1 * from articolicaratteristiche ;
        where articolicaratteristiche.cod_art = cur_articoli.cod_art ;
        and articolicaratteristiche.caratteristica = cur_articoli.caratteristica) ;
    into cursor cur_dainserire readwrite
select cur_dainserire
brow
? "1"
? TIMEOFDAY(4)
// performance test
select * from cur_articoli_x into cursor cur_articoli readwrite
select * from cur_articoli_x where .f. into cursor cur_dainserire readwrite
select cur_articoli
scan
    select articolicaratteristiche
    seek str_pad(cur_articoli.cod_art, 16, " ") + cur_articoli.caratteristica
    if !found()
        select cur_articoli
        scatter memvar
        select cur_dainserire
        append blank
        gather memvar
    endif
endscan
select cur_dainserire
brow
In this code, the first SELECT with NOT EXISTS is, in practice, equivalent to the second loop.
I need to select all the cur_articoli records that are not contained in articolicaratteristiche.
The EXPLAIN output for the first query:
Create temporary index on 'COD_ART+CUR_ARTICOLI.CARATTERISTICA' for join
Table 'articolicaratteristiche' has 5283 records
Table '_00000e380038' has 1905 records
Processing table 'articolicaratteristiche' sequentially
WHERE condition for table '_00000e380038' could not be optimized
Total I/O read operations was 5245119
No records selected
So the first SELECT takes 46.053 seconds.
The loop takes only 2.077 seconds…
Is it possible to speed up the first query?
This is a really convoluted query.
You are selecting from a table and then checking for the existence of records inside a nested EXISTS subquery!
Also, your timing is taken after the browse, so it does not reflect the query alone; it includes the full display operation as well.
If certain operations can be sped up with a mixture of SQL and NoSQL, then you have a workaround.
NoSQL will always be faster than SQL, as it uses indexes and works directly with the data rather than with a temporary subset of the data, which is what SQL works on since it is based on sets.
It looks like it cannot be optimized, as you are performing a nested EXISTS subquery for each record.
This is performing a lot of I/O.
Yes, you are performing a join for every record inside the EXISTS subquery.
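One way to avoid the per-record subquery is to express the anti-join as a LEFT JOIN filtered on IS NULL, which can often be satisfied with a single indexed pass over each table. This is an untested sketch; the alias ac is mine, and it assumes the optimizer can use the existing articolicaratteristiche index for the join condition:

```
select cur_articoli.* ;
    from cur_articoli ;
    left join articolicaratteristiche ac ;
        on ac.cod_art = cur_articoli.cod_art ;
        and ac.caratteristica = cur_articoli.caratteristica ;
    where ac.cod_art is null ;
    into cursor cur_dainserire readwrite
```

If the LEFT JOIN form still cannot be optimized, the SCAN/SEEK loop in the question remains the pragmatic NoSQL-style answer, since it drives the existing index directly.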
When I start the app using a realistic dataset (about half the size of the client's dataset), the app takes about 3 or 4 minutes to load.
Lianja tables, imported from VFP. 25 pages.
It sounds like you are reopening tables over and over and scanning through them at load time. There are attributes to prevent this from happening.
1) Use VTs (virtual tables), not tables, with the WHERE clause set to 1=0 until the page is viewed. That way you will be loading data for one page, not 25.
2) Never pull an entire table. I don’t know if that’s what you are doing, but with SQL backends that’s the way it is done. This is even more of an issue in browser/mobile apps, but it also matters in Lianja apps if you haven’t played with LSQL caching. VFP uses extensive caching, which makes this invisible in many cases (it turns out that this caching, not Rushmore, is the secret to VFP’s data speed, as determined by actual testing done by Christof).
3) Ensure you have all the indexes needed for the queries you are using. One table scan on a large table is enough to wreck loading times, even when returning an appropriate dataset.
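To illustrate point 1, a virtual table can be defined with a condition that matches no rows, so opening the page costs almost nothing until a real filter is swapped in. This is a sketch only; the table name orders and the connection string are hypothetical, and it assumes Lianja's CREATE VIRTUALTABLE syntax:

```
// Define a virtual table whose initial query returns no rows (1=0)
create virtualtable vt_orders ;
    connstr "my_backend" ;
    as select * from orders where 1=0

// Later, when the page is actually viewed (e.g. in its activate
// delegate), requery the section with a real condition so only the
// rows that page needs are fetched.
```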
The navigation bar slider that is used to navigate between records in an App requires it to know the total number of records in a table based on the filter and the number of deleted records. This requires a scan through the table which is fine with smallish tables that are not on a network drive.
A good performance gain can be achieved with large tables and/or network drives by hiding the navigation bar slider.
In Lianja v2.1 there is a new page attribute called “Deferred load”. If this is checked, data is not loaded into the associated page until the page is activated and becomes the current page. In other words, the loading of the page with data, and the refresh of this data, does not occur for hidden pages until they are first activated.
By default “Deferred load” is off. Bear in mind that if you check it on, the data in the page will not be accessible until the page has been activated at least once. In many cases this is acceptable.
This speeds up App load time quite significantly when an App consists of a large number of pages.
In Lianja 2.1 the App Inspector “Events” tab now displays an elapsed time for each operation. This can help you pinpoint slow parts of your apps with a view to optimizing their performance.
If you find any odd behavior with App loading, let me know; as a workaround, put --nodeferredload as a command line switch, since deferred page loading is on by default.
You can use the App Inspector when loading an App to see what’s going on and what is the performance bottleneck if any in your Apps.
I have now enabled SMARTQUERY caching for network drives in Lianja v2.1.
This is my test on a database and the performance gains I am now seeing by tuning the database engine with the commands detailed below.
clear smartquery    // this clears the existing SMARTQUERY cache
set smartquery on
set transaction isolation level repeatable read
set share on
set networkshare on
set icache to 1024
set dcache to 8192*8
open database bdficc
set explain on
SELECT * FROM cliente,historia into cursor testa nomemo WHERE HISTORIA.CL=CLIENTE.CCL

********************************************************************************
Explaining execution plan for command:
select * FROM cliente,historia into cursor testa nomemo WHERE HISTORIA.CL=CLIENTE.CCL
--------------------------------------------------------------------------------
No smartquery cache available
Optimized JOIN to table 'historia' using index tag 'INDH2'
SET EXCLUSIVE OFF
SET NETWORKSHARE ON
SET SMARTQUERY ON
SET DCACHE ON (Table page cache)
SET DCACHEBLOCKSIZE TO 8192 (Size of table cache pages)
SET DCACHESIZE TO 65536 (Number of table pages to cache)
SET ICACHE TO 1024 (Index node cache)
SET SQLCACHE ON (SQL Query cache)
SET SQLCACHESIZE TO 65536 (Size of SQL Query cache extents)
SET SQLBUFFERSIZE TO 1024 (Number of SQL write cache buffers)
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ requires shared(read) lock on tables and indexes
Performing join
Parent table 'cliente' has 6469 records, table size is 7.00MB, DCACHE is ON, DCACHEBLOCKSIZE 8912, record size 1114, records per page 8
Child table 'historia' has 45619 records, table size is 63.00MB, DCACHE is ON, DCACHEBLOCKSIZE 8912, record size 1453, records per page 6
Processing parent table 'cliente' sequentially
Total I/O read operations was 51030
Total I/O read size was 71.4874MB
Total I/O write size was 0.0000MB
Total SEEK operations performed was 6470
Total SEEK operations failed was 1593
Total locks performed 0
Total rlocks performed was 7
Total unlocks performed was 3 (4 active)
Total I/O read cache hits was 55328
Total I/O read cache misses was 7818
Total I/O index cache hits was 21561
Total I/O index cache misses was 586
Join completed in 7.176 seconds
Transaction isolation level requires shared(read) unlock of tables and indexes
Save as CURSOR TESTA
Save as c:\users\barrym~1\appdata\local\temp\_00001ab00008.dbf
Inserting 34227 rows of size 2526 without memos using buffersize 161664, records per page 64
Rows inserted in 6ms
Total I/O read operations was 51037
Total I/O read size was 71.4971MB
Total I/O write size was 82.4523MB
Total SEEK operations performed was 6472
Total SEEK operations failed was 1595
Total locks performed 7
Total rlocks performed was 8
Total unlocks performed was 15 (0 active)
Total I/O read cache hits was 55329
Total I/O read cache misses was 7819
Total I/O index cache hits was 21561
Total I/O index cache misses was 587
34227 records selected in 7.737 seconds
Now I execute the same query which is now SMARTQUERY cached.
SELECT * FROM cliente,historia into cursor testa nomemo WHERE HISTORIA.CL=CLIENTE.CCL
********************************************************************************
Explaining execution plan for command:
select * FROM cliente,historia into cursor testa nomemo WHERE HISTORIA.CL=CLIENTE.CCL
--------------------------------------------------------------------------------
Using smartquery cache file C:\Lianja\data\system\_sq00001ab01.dbf
Save as CURSOR TESTA
Total I/O read operations was 8
Total I/O read size was 0.0106MB
Total I/O write size was 0.0000MB
Total SEEK operations performed was 7
Total SEEK operations failed was 0
Total locks performed 0
Total rlocks performed was 8
Total unlocks performed was 8 (0 active)
Total I/O read cache hits was 15
Total I/O read cache misses was 6
Total I/O index cache hits was 11
Total I/O index cache misses was 2
34227 records selected in <1ms
Notice that SMARTQUERY has detected that the query was previously executed and is now cached so it just uses that resultset. If there are any changes to the data in the FROM tables of the SQL SELECT then the SMARTQUERY cache for that query is thrown away and re-executed. You can specify the “lifetime” of the resultset using SET SMARTQUERY TO nSeconds, in which case if any of the tables have changed within nSeconds of the last query execution then the resultset cache will be thrown away and the query re-executed.
At any time you can throw away all of the SMARTQUERY cache by executing CLEAR SMARTQUERY.
In many applications real-time queries for BI and reporting are not required and you can work with a snapshot of data. You can either SET SMARTQUERY TO a high number e.g. 60*60*24 which is 24 hours or alternatively COPY DATABASE TO and report from the snapshot of the data.
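For example, to work from a snapshot that is at most 24 hours old, the cache lifetime can be set once at startup. This is a sketch using the SET SMARTQUERY commands described above:

```
set smartquery on
set smartquery to 60*60*24   && resultsets younger than 24 hours are reused
```

Any query whose underlying tables changed more than 24 hours after the cached execution will be re-run automatically; everything else is served from the cache.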
If you want to perform a query and ignore the SMARTQUERY cache specify the NOSMARTQUERY keyword on the SQL SELECT.
SELECT * FROM cliente,historia into cursor testa nomemo nosmartquery WHERE HISTORIA.CL=CLIENTE.CCL
Or alternatively, you can force SMARTQUERY for a particular SQL SELECT like this:
SELECT * FROM cliente,historia into cursor testa nomemo smartquery WHERE HISTORIA.CL=CLIENTE.CCL
Working with huge amounts of data for BI and reporting is sped up dramatically by SMARTQUERY.
Note that the SMARTQUERY cache is shared across all users on a system, so if one user performs a query and it becomes cached, then, provided no changes are made to the tables in the query, other users will use the cached resultset.
On a heavily loaded WORM (Write Once Read Many) database, such as one used as a data warehouse, this results in massive performance gains.
The smartquery cache is persistent across server reboots.
For anyone interested, the SMARTQUERY resultset cache for each SQL SELECT can be found in:
These are kept in the server system database directory and shared across all desktop LAN users, Lianja SQL Server connections, Web and Mobile users.
For optimum application performance, if you have Microsoft Security Essentials or other malware protection, be sure to exclude lianja.exe and lianjaruntime.exe from real-time protection.
I have recently been doing some speed comparisons between Lianja and SQL Server.
For an apples-to-apples comparison, you need to return your SQL Server results in a data-bound grid, since the Lianja browse command displays its results in a data-bound grid.
In SQL Server select your table, right click it and select “Edit top 200 rows”.
It will then return the top two hundred rows. Right click anywhere in the result set > Pane > SQL.
Modify the SQL and remove the TOP(200). Then re-run the query into the data-bound grid.
In my case, I have a wide table with 100,000 rows. In SQL Server, on a box with 40 cores and 512 GB of memory, the result returns in 11 seconds. In Lianja, it returns in 1 second via the browse command.
Both sets of data had the same indexes created.
This is opening up some eyes at my firm.
If speed is important to your web app, you really need to have a go at using OData updates.
The following article on the main website covers the LianjaCloudDataServices.js library (which is included in Lianja v2.0) which you can use to perform OData CRUD operations on any server side database from any client side framework.
I just wanted to share the HUGE app load speed improvements we have experienced with the v2.1 beta.
We run a large app via RDP on big production servers.
Historically, one of our frustrations has been that big apps take a while to load in Lianja – well, not any more. Many improvements, but especially the deferred load feature, make a real difference.
Below are the times from entering the password in the App Center to the first page of the app appearing.
Note that the 14-second time is the same whether running with the --rdp switch or not.