STARLIMS + Google Data Connector = lazy lab data

Yup, almost a full year without a post: shame on me. But here I am, with something cool for you STARLIMS users/developers/administrators!

Recently, I was looking for a good reporting tool – thinking of replacing SAP’s Crystal Reports – and somehow stumbled upon Google Data Studio. It is kind of a mix of Microsoft Power BI, Tableau, Qlik and any and all data visualization/analytics/insert-a-good-buzzword tools.

… Except it is free. Kind of.

DISCLAIMER: This project is in progress and I do this in my free time at home. I do not want to release the Google script code just yet, but I do want to share progress and see where it takes me. Please contact me directly if you wish to know more and want to use this. Right now, this is closer to a prototype than anything else.

There are good tutorials on writing your own Data Connectors, and Google is really working hard to get developers on board. Of course, since this is not a very well-known tool yet, it is still quite tricky to get going, even when following said tutorials.

But chances are that if you read this, you know me. And you know this is the kind of thing that intrigues me. Challenge! New playground! Undiscovered lands! So here I am: I have given it a try, and I will share my work here. Freely. Eventually, who knows; maybe someone somewhere will use it!

About STARLIMS…

Given the nature of STARLIMS, you can kind of reverse-engineer the code anyway, so aside from giving you a (somewhat) working solution here, I will explain what I have done and share the connector’s code so you can make it better.

To know more about STARLIMS, visit their web site: https://www.informatics.abbott/

To learn how to build your own Google Data Connector, head to https://developers.google.com/datastudio/connector/get-started

That’s where I started.

Google Data Connector

Now, what’s behind a Google Data Connector? Basically, there are really just four functions (a minimal skeleton follows this list):

  1. Authentication. For STARLIMS, I am using the Username and Password approach. I pass these in headers, so if your connection is secured (HTTPS) then you’re good. If you use normal STARLIMS authentication, then this will work for you. Where I work, we use AD, and it works fine too.
  2. getSchema. This is critical – we need to provide the connector, up front, with the full schema of the data. Given the size of STARLIMS, I am opting for either
    1. Predefined entities (Folders, Orders, Ordtask or COC)
    2. QBE Templates (code-based)

I think I can manage the schemas from these quite easily: I need to know field names, labels and types, and whether they are measures or dimensions; that’s it.

  3. getData. This is the tricky one. The request passes a subset of the schema’s fields, and the returned data must match that subset exactly. It can be done, but one needs to build the right SQL statements based on the schema.

  4. getConfig. This ties everything together. The way I built it, you define the URL of your connection (a.k.a. the STARLIMS URL) and whether you will use a specific QBE or a predefined entity. When you first create a connection, it will ask for a username and password. I am not storing these; Google is. It is up to you to decide whether you trust Google with the data…
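
Since I am not releasing my actual connector code yet, here is a generic sketch of what those four functions look like in Google Apps Script, roughly following Google’s get-started guide. The endpoint (/api/getdata), the header names and the sample fields below are made up for illustration only.

function getAuthType() {
  // Username/password credentials: Google stores them, the connector forwards them to STARLIMS.
  return { type: 'USER_PASS' };
}

function getConfig(request) {
  // Ask for the STARLIMS URL and which predefined entity or QBE template to expose.
  return {
    configParams: [
      { type: 'TEXTINPUT', name: 'baseUrl', displayName: 'STARLIMS URL' },
      { type: 'TEXTINPUT', name: 'entity',  displayName: 'Entity or QBE template' }
    ]
  };
}

function getSchema(request) {
  // The real work is building this from the entity/QBE: field names, labels, types,
  // and whether each field is a dimension or a metric (measure). Sample fields only.
  return {
    schema: [
      { name: 'FOLDERNO',   label: 'Folder No', dataType: 'STRING',
        semantics: { conceptType: 'DIMENSION' } },
      { name: 'NUMSAMPLES', label: 'Samples',   dataType: 'NUMBER',
        semantics: { conceptType: 'METRIC' } }
    ]
  };
}

function getData(request) {
  // Only a subset of the fields is requested; the returned values must match the returned schema.
  var requested = request.fields.map(function (f) { return f.name; });
  var schema = getSchema(request).schema.filter(function (f) {
    return requested.indexOf(f.name) >= 0;
  });

  // Hypothetical STARLIMS REST endpoint and header names; over HTTPS the credentials
  // (read back from wherever setCredentials() stored them) travel safely in the headers.
  var response = UrlFetchApp.fetch(
    request.configParams.baseUrl + '/api/getdata?fields=' + requested.join(','),
    { headers: { STARLIMSUser: 'user', STARLIMSPass: 'password' } });

  var rows = JSON.parse(response.getContentText()).map(function (record) {
    return { values: schema.map(function (f) { return record[f.name]; }) };
  });

  return { schema: schema, rows: rows };
}

(A USER_PASS connector also needs setCredentials() and isAuthValid(), but the four functions above are the heart of it.)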

And that’s it, really! This is the kind of chart you can build in a few clicks:

Of course, these are very simple / silly examples. But they show that you can create visualizations easily, and good-looking ones at that!

My idea / goal is to extend my connector and code so you can create dynamic dashboards on your STARLIMS data using the existing QBE templates you spent hours / days optimizing and cleaning. I think this will be definitely worth it.

You can get started here. I will update the STARLIMS code and enhance the connector as I go. For now, all I do is a simple query on FOLDERS and ORDTASK as a kind of proof of concept; it actually works quite well!
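
To give an idea of the “right SQL statements” part of getData, that proof of concept boils down to a field-to-query mapping like this (again a sketch, with a made-up helper; it assumes the chosen entity maps straight to a table or QBE):

// Sketch only: turn the requested subset of fields into a SELECT statement.
function buildSql(requestedFields, entity) {
  var columns = requestedFields.map(function (f) { return f.name; }).join(', ');
  return 'SELECT ' + columns + ' FROM ' + entity;
}

// e.g. buildSql(request.fields, 'FOLDERS') -> "SELECT FOLDERNO, NUMSAMPLES FROM FOLDERS"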

Let’s see where this gets me in the future!

Cheers,

Michel

P.S. Remember: this is work in progress and not production-ready!

Identify Server Script Performance Bottlenecks

This week was Abbott Informatics’ APAC Forum. Speaking with old colleagues, I got inspired to try to revive this site for the 4th time (or is it the 5th?).

I’m currently sitting in a performance enhancement session and thinking to myself: heh, that’s NOT how I would go about it (sorry guys!). Silver bullets? Nah.

The first step to improving performance is to identify what is slow (duh!). What are the bottlenecks? Why is it slow?

As a STARLIMS developer, I know that oftentimes the code written in there is not necessarily the most efficient. So why not start by monitoring the performance of, let’s say, SSL scripts, which represent the backbone of the business layer?

I’m thinking: why not have a simple tool that records, like a stopwatch, the execution time of each block of code, and then provides a report I can read? Heck, .NET has a Stopwatch class! And hey, STARLIMS IS .NET!

The more I think about it, the more I’m convinced: let’s do it!

How do we do this?

First, let’s create a class. I like classes. I like object-oriented code. I like the way it looks in SSL afterward, and it makes it way easier to scale through inheritance later on. Question is: what should the class do?

Well, thinking out loud, I think what I want it to do is something like this:

  1. Start the stop watch
  2. Do something
  3. Monitor event 1
  4. Do something
  5. Monitor event 2
  6. Do something else
  7. Monitor event x
  8. so on and so forth
  9. Provide a readable report of all the event duration

I also want it to count the number of times an event runs, and I want to know the AVERAGE time spent in there as well as the TOTAL time the event took.

Now that I know what I want, let’s write the class that will do it (for the sake of this example, I created it in a global utility SSL category, SGS_Utility).

:CLASS perfMonitor;

:DECLARE oStopWatch;
:DECLARE nLastCheck;
:DECLARE aEvents;
:DECLARE WriteToLog;

:PROCEDURE Constructor;
:PARAMETERS bAutoStart;
:DEFAULT bAutoStart, .T.;
Me:WriteToLog := .F.;
Me:nLastCheck := 0;
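/* aEvents holds the report data; the first row is the column header;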
Me:aEvents := { {"Event", "Total Duration", "# of calls", "Avg Duration"} };
Me:oStopWatch := LimsNetConnect("System", "System.Diagnostics.Stopwatch");
:IF bAutoStart;
	Me:Start();
:ENDIF;
Me:WriteToLog := .T.;
:ENDPROC;

:PROCEDURE Monitor;
:PARAMETERS sMessage;
:DEFAULT sMessage, "";
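/* record the time elapsed since the last checkpoint under sMessage and update that event's total, count and average;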
:DECLARE nElapsed, nDuration, i, bNew;
:IF Me:oStopWatch:IsRunning;
	Me:oStopWatch:Stop();
	nElapsed := Me:oStopWatch:ElapsedMilliseconds;
	nDuration := nElapsed - Me:nLastCheck;
	:IF Me:WriteToLog;
		UsrMes("Performance Monitor ==> " + LimsString(nDuration) + " ms. Message: " + sMessage);
	:ENDIF;
	Me:nLastCheck := nElapsed;
	bNew := .T.;
	:FOR i := 2 :TO Len(Me:aEvents); /* start at 2 to skip the header row;
		:IF Me:aEvents[i][1] == sMessage;
			Me:aEvents[i][2] += nDuration;
			Me:aEvents[i][3] += 1;
			Me:aEvents[i][4] := Me:aEvents[i][2] / Me:aEvents[i][3];
			bNew := .F.;
		:ENDIF;
	:NEXT;
	:IF bNew;
		aAdd(Me:aEvents, { sMessage, nDuration, 1, nDuration });
	:ENDIF;
	Me:oStopWatch:Start();
:ENDIF;
:ENDPROC;

:PROCEDURE Restart;
Me:Monitor("Internal Restart");
Me:oStopWatch:Restart();
Me:nLastCheck := 0; /* Restart() zeroes the elapsed time, so reset the checkpoint too;
:ENDPROC;

:PROCEDURE Stop;
Me:Monitor("Internal Stop");
Me:oStopWatch:Stop();
:ENDPROC;

:PROCEDURE Start;
Me:Monitor("Internal Start");
Me:oStopWatch:Start();
:ENDPROC;

:PROCEDURE ToString;
:RETURN BuildString2(Me:aEvents, CRLF, "	");
:ENDPROC;

The above gives us an object we can start, restart, and use to monitor events (messages). At the end, we call the usual ToString() and get our “report”. Example of using this:

:DECLARE oPerformanceMonitor;
oPerformanceMonitor := CreateUdObject("SGS_Utility.perfMonitor");
lWait(1.3); /* fake doing something that takes time;
oPerformanceMonitor:Monitor('Step 1');
lWait(0.8); /* fake doing something that takes time;
oPerformanceMonitor:Monitor('Step 2');
lWait(1.1); /* fake doing something that takes time;
oPerformanceMonitor:Monitor('Step 3');
lWait(1.45); /* call Step 1 again to generate an aggregate;
oPerformanceMonitor:Monitor('Step 1');
:RETURN oPerformanceMonitor:ToString();

If I run the above, the output will look like this:

Event	Total Duration	# of calls	Avg Duration
Step 1	2025	2	1012.5
Step 2	1015	1	1015
Step 3	1011	1	1011

I have been using this in many places in our system, and it did help me find the best places to optimize our code. Sometimes the same insert will run 500 times and total up to 15 seconds; that is worse (at least for the end user) than one call that runs only once and takes 3 seconds.

I hope this can help you find the bottlenecks of your SSL code!