Ok, we all played with it a bit. We’ve seen what it can do – or at least some of it. I was first impressed, then disappointed, and sometimes somewhat in between. How can it really help me? I thought.
I always want to learn new things. On my list I had Node.js, React and MongoDB… So I onboarded my copilot, ChatGPT, for a test drive!
BTW: that application is not secured – I didn’t want to spend too much money on this (SSL, hosting and whatnot) – so don’t go around entering sensitive information 🙂
Let me know in the comments if you want to explore the result further – I can create an account for you 😁 or maybe even spin up a dedicated instance for you (I want to try that at some point!!)
Finally! It was about time someone did something about it and spread the word: automated testing of STARLIMS (actually, of any app with a REST API!). Ask anyone working with (mostly) any LIMS: regression tests are rarely there at all, let alone automated.
I understand it is difficult to automate the front end, which is what tends to break… Nonetheless, I had this idea – please read through! – and I think there’s an easy way to automate some regression tests on a STARLIMS instance. Here are my arguments for why it brings value:
Once a framework is in place, the effort is what you put into it. It can be a lot of effort, or minimal. I would say aim for minimal effort at the beginning. Read on, you’ll understand why.
Focus on bugs. For any bug fixed in the backend, prepare a regression test. Chances are you’re doing it anyway (writing a test script to check your fix runs?)
For features, just test the default parameters at first. You can go ahead and do more, but this will at least tell you that the script still compiles and handles default values properly.
You CAN and SHOULD have regression tests on your datasources! At the very least, do a RunDS(yourDatasource) to check it compiles – see the sketch after this list. If you’re motivated, convert the XML to a .NET DataSet and check that the columns you’ll need are there.
Pinpoint regression tests. You fixed a condition? Test that condition. Not all conditions. Otherwise it becomes a unit test, not a regression test.
The idea is that every day / week / whatever schedule you want, ALL OF YOUR REGRESSION TESTS WILL RUN AUTOMATICALLY. Therefore, if one day you fix one condition, and the next day you fix something else in the same area, you want both your fix AND the previous condition fix to keep working. As such, the value of your regression tests grows over time. It is a matter of habit.
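To make the datasource point concrete, here is a sketch of such a one-liner regression script. The datasource name is a placeholder; the framework below treats any uncaught error (for example, a datasource that no longer compiles) as a failure:

/* sketch: datasource regression test - the datasource name is a placeholder ;
/* RunDS raises an error if the datasource no longer compiles, failing the test ;
:DECLARE ds;
ds := RunDS("MyCat.MyDatasource");
:RETURN .T.;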
What you’ll need
POSTMAN – This will allow you to create a regression collection, add a monitor, and know when a regression fails. This is the actual tool.
REST API – We will be running a series of scripts from POSTMAN using the STARLIMS REST API. You’ll see, this is the easy part. If you followed how to implement new endpoints, this will be a breeze.
STARLIMS Setup
In STARLIMS, add a new server script category for your API. In my case, I call it API_Regression_v1. This will be part of the API’s route.
Add a new script API_Regression_v1.Run. This is our request class for the POST endpoint. The code behind will be simple: we receive a script category in the parameters, and we run all children scripts. Here’s a very simple implementation:
:CLASS Request;
:INHERIT API_Helper_Custom.RestApiCustomBase;

:PROCEDURE POST;
:PARAMETERS payload;
:DECLARE ret, finalOutput;
    finalOutput := CreateUdObject();
    /* the category parameter is mandatory - it tells us which scripts to run ;
    :IF !payload:IsProperty("category");
        finalOutput:StatusCode := Me:HTTP_NOT_FOUND;
        finalOutput:response := CreateUdObject();
        finalOutput:response:StatusCode := Me:HTTP_NOT_FOUND;
        :RETURN finalOutput;
    :ENDIF;
    /* TODO: implement script validation (can we run this script through regression test? / does it meet regression requirements?) ;
    ret := Me:processCollection( payload:category );
    finalOutput:StatusCode := Me:HTTP_SUCCESS;
    finalOutput:response := ret;
    finalOutput:response:StatusCode := Me:HTTP_SUCCESS;
    :RETURN finalOutput;
:ENDPROC;

:PROCEDURE processCollection;
:PARAMETERS category;
:DECLARE scripts, i, sCatName, output, script;
    output := CreateUdObject();
    output:success := .T.;
    output:scripts := {};
    sCatName := Upper(category);
    /* list all server scripts of the requested category ;
    scripts := SQLExecute("select coalesce(c.DISPLAYTEXT, c.CATNAME) + '.' +
                                  coalesce(s.DISPLAYTEXT, s.SCRIPTNAME) as s
                             from LIMSSERVERSCRIPTS s
                             join LIMSSERVERSCRIPTCATEGORIES c on s.CATEGORYID = c.CATEGORYID
                            where c.CATNAME like ?sCatName?
                            order by s", "DICTIONARY" );
    /* run every script - an uncaught error marks it (and the whole run) as failed ;
    :FOR i := 1 :TO Len(scripts);
        script := CreateUdObject();
        script:scriptName := scripts[i][1];
        script:success := .F.;
        script:response := "";
        :TRY;
            ExecFunction(script:scriptName);
            script:success := .T.;
        :CATCH;
            script:response := FormatErrorMessage(getLastSSLError());
            output:success := .F.;
        :ENDTRY;
        aAdd(output:scripts, script);
    :NEXT;
    :RETURN output;
:ENDPROC;
As you can guess, you will NOT want to expose this endpoint in a production environment. You’ll want to run this on your development / test instance, whichever makes most sense to you (maybe both). You might also want to add some more restrictions, like only allowing categories that start with “Regression” or something along those lines… I added a generic setting called “/API/Regression/Enabled” with a default value of false to check in the route (see next point), and a list “/API/Regression/Categories” of the categories that can be run (a whitelist).
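As a sketch, that guard at the top of POST could look like the following – Generic_Settings.GetSetting and Generic_Settings.IsInList are hypothetical helpers standing in for however you read generic settings and lists on your system:

/* sketch of the guard - GetSetting / IsInList are hypothetical helpers ;
:IF !(ExecFunction("Generic_Settings.GetSetting", {"/API/Regression/Enabled"}) == "true");
    finalOutput:StatusCode := Me:HTTP_NOT_FOUND;
    :RETURN finalOutput;
:ENDIF;
:IF !ExecFunction("Generic_Settings.IsInList", {"/API/Regression/Categories", payload:category});
    finalOutput:StatusCode := Me:HTTP_NOT_FOUND;
    :RETURN finalOutput;
:ENDIF;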
Next, you should add this to your API route. I will not explain here how to do this; it should be something you are familiar with. Long story short, API_Helper_Custom.RestApiRouter should be able to route callers to this class.
POSTMAN setup
This part is very easy. Create yourself a new collection – something like STARLIMS Regression Tests v1. Prepare this collection with an environment so you can connect to your STARLIMS instance.
One neat trick: prepare your collection with a pre-request script that will make it way easier to use. I have this script I tend to re-use every time:
// required for the hash part; crypto-js is included in POSTMAN, nothing to install
var CryptoJS = require("crypto-js");
// get data required for API signature
const dateNow = new Date().toISOString();
const privateKey = pm.environment.get('SL-API-secret'); // secret key
const accessKey = pm.environment.get('SL-API-Auth'); // public/access key
// {{url}} is an environment variable holding the base URL of the API
const url = pm.environment.get('url') + request.url.substring(8);
const method = request.method;
const apiMethod = "";
var body = "";
if (pm.request.body && pm.request.body.raw){
body = pm.request.body.raw;
}
// create base security signature
var signatureBase = `${url}\n${method}\n${accessKey}\n${apiMethod}\n${dateNow}\n${body}`;
// get encoding hash of signature that starlims will attempt to compare to
var data = CryptoJS.enc.Utf8.parse(signatureBase);
const hash = CryptoJS.HmacSHA256(data, privateKey);
const encodedHash = encodeURIComponent(CryptoJS.enc.Base64.stringify(hash));
// create headers
pm.request.headers.add({"key":"SL-API-Timestamp", "value":dateNow});
pm.request.headers.add({"key":"SL-API-Signature", "value":encodedHash});
pm.request.headers.add({"key":"SL-API-Auth", "value":accessKey});
pm.request.headers.add({"key":"Content-Type", "value":"application/json"});
Note: in the above code, there are a few things you need to initialize in your environment; pay attention to the pm.environment.get() variables.
Once that is done, you add one request in your collection of type POST. Something that looks like this:
POST request example
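The body itself is plain JSON along these lines (the category name is the one we create in the next step):

{
    "category": "My_Regression_POC"
}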
See the JSON body? We’re just telling STARLIMS “this is the category I want you to run”. With the above script, all scripts in this category will run when this request is sent. And since every script runs in a try/catch, you’ll get a nice response listing each script’s success (or failure).
Let’s create at least 1 regression test. In the category (in my case, My_Regression_POC), I will add one script named “compile_regression_framework”. The code will be very simple: I just want to make sure my class is valid (no typo and such).
:DECLARE o;
o := CreateUdObject("API_Regression_v1.run");
:RETURN .T.;
Set up a POSTMAN monitor
Now, on to the REALLY cool stuff. In POSTMAN, go to monitor:
POSTMAN – Monitor option
Then just click the small “+” to add a new monitor. It is very straightforward: all you need to pick is a collection (which we created earlier) and an environment (which you should have by now). Then set up the schedule and the email notification in case it fails.
Setting up a Monitor
And that’s it, you’re set! You have your framework in place! The regression tests will run every day at 3pm (according to the above settings), and if something fails, you’ll receive an email. This is what the dashboard looks like after a few days:
Monitor Dashboard
Next Steps
From here on, the trick is organizing regression scripts. In my case, what I do is
I create a new category at the beginning of a sprint
I duplicate the request in the collection, with the sprint name in the request’s title
I change the JSON of the new request to mention the new category
Then, for every bug fixed during that sprint, I create a regression script in that category. That script’s purpose is solely to test what was fixed.
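As a sketch (the script name and parameter are made up), such a pinpoint regression script can be tiny – an uncaught runtime error is what flags it as failed:

/* hypothetical example: this sprint fixed SampleManagement.CancelSample for a blank sample id ;
/* re-run the exact condition that used to break - an uncaught error fails the test ;
ExecFunction("SampleManagement.CancelSample", {"DUMMY-001"});
:RETURN .T.;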
What happens then is that every day, all previous sprints’ regression tests run, plus the new ones! I end up having a lot of tests.
Closing notes
Obviously, this does not replace a good testing team. It only supports them by re-testing things they might not think about. It also doesn’t test everything; there are always scenarios that can’t be tested with a server call alone. And it doesn’t test the front end.
But still, the value is there. What is tested is tested. And if something fails, you’ll know!
One question a developer asked me about this: “sometimes you change code, and it breaks a previous test, and this test will never pass again because it is now a false test. What then?”
Answer: either delete the script, or just change the first line to :RETURN .T.; with a comment that the test is obsolete. Simple as that.
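The neutered script keeps its history but always passes:

/* OBSOLETE - behavior intentionally changed in a later sprint, test no longer applies ;
:RETURN .T.;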
At some point, you can start creating more collections and adding more monitors to split schedules (some tests run weekly, others daily, etc.).
And finally, like I said, the complexity and how much you decide to test is really up to you. I recommend starting small; the value is there without much effort. Then, if a critical scenario arises, you can write a more complex test. That should be the exception.
One of my pet projects this summer was to build a bartop arcade cabinet. I had some rpi400s laying around, which are rpi4s embedded in a keyboard. The idea of always having a keyboard handy for the arcade cabinet sounded like a great feature, and to access it, I had to find a way to easily open the cabinet.
That’s why there are hinges in front of the controls!
All in all, building this was fun, and I decided to use Batocera.linux as the OS. It turned out to be the easiest, most complete and fastest one, based on my tests.
The main goal was to load MAME arcade games (Tetris, Pac-Man, Super Street Fighter 2). But I ended up adding Mario Kart 64 as well, and it actually runs pretty well if the resolution is set to 640×480 for that game.
There’s still one bug going on with Batocera – after a while, we must reboot the arcade since there seems to be a memory leak somewhere (the developers are aware).
In the box, there’s
rpi400
Old 19-inch 4:3 monitor
2 sets of generic Dragon arcade USB controllers
HDMI to VGA active adapter (powered)
Power bar outlet (rewired to an on/off switch in the back)
Altec Lansing speakers
Arcade bartop cabinet (no stickers)
I thought it might be interesting to show you various stages of the build, in case you are looking for some inspiration:
Initial frame · Hinges for the bartop · Stained, ready to assemble!
During the whole configuration, I had a problem. RetroPie was not able to output sound properly, and Batocera was not able to connect to WiFi. It turned out this was caused by insufficient power in the rpi.
Lesson 1: avoid a USB sound card if you can. It draws a lot of power, which can interfere with the WiFi & Bluetooth module (which is what happened to me). If you must use one, try to get one that can draw its power from somewhere else. I prefer to rely on the HDMI sound output.
Lesson 2: if you use an old monitor, get an active HDMI to VGA adapter. These adapters usually include an audio output (which solves the above problem). A passive adapter’s chip relies on the power provided over HDMI, which may result in black screen flickers in some games. Using an active adapter fixed the problem for me.
This is a very different topic from what I usually post, but this felt like a good place to share it!
Alright folks! If you’ve been playing with the new STARLIMS REST API and tried production mode, perhaps you’ve run into all kinds of problems providing the correct SL-API-Signature header. You may wonder “but how do I generate this?” – even following STARLIMS’s C# example may yield unexpected 401 results.
At least, it did for me.
I was able to figure it out by looking at the code that reconstructs the signature on STARLIMS side, and here’s a snippet of code that works in POSTMAN as a pre-request code:
// required for the hash part. You don't need to install anything, it is included in POSTMAN
var CryptoJS = require("crypto-js");
// get data required for API signature
const dateNow = new Date().toISOString();
// this is the API secret found in STARLIMS key management
const privateKey = pm.environment.get('SL-API-secret');
// this is the API access key found in STARLIMS key management
const accessKey = pm.environment.get('SL-API-Auth');
// in my case, I have a {{url}} variable, but this should be the full URL to your API endpoint
const url = pm.environment.get('url') + request.url.substring(8);
const method = request.method;
// I am not using api methods, but if you are, this should be set
const apiMethod = "";
var body = "";
if (pm.request.body && pm.request.body.raw){
body = pm.request.body.raw;
}
// this is the reconstruction part - the text used for signature
const signatureBase = `${url}\n${method}\n${accessKey}\n${apiMethod}\n${dateNow}\n${body}`;
// encrypt signature
var data = CryptoJS.enc.Utf8.parse(signatureBase);
const hash = CryptoJS.HmacSHA256(data, privateKey);
const encodedHash = encodeURIComponent(CryptoJS.enc.Base64.stringify(hash));
// set global variables used in header
pm.globals.set("SL-API-Timestamp", dateNow);
pm.globals.set("SL-API-Signature", encodedHash);
One point of interest – if it still isn’t working and you can’t figure out why, an undocumented STARLIMS feature is to add this application setting to the web.config to log more info:
<add key="RestApi_LogLevel" value="Debug" />
I hope this helps you use the new REST API provided by STARLIMS!
JMeter is a load / stress tool built in Java which allows you to simulate multiple user connections to your system and monitor how the application & hardware respond to heavy load.
In STARLIMS, I find it is a very good tool for performance optimization. One can detect redundant calls and chatty pieces of code, and identify bottlenecks, even when running with a single user.
As a bonus, Microsoft has a preview version of load tests based on JMeter, which can be integrated to your CI/CD process!
So, in this article, my goal is to help you get started – once set up, it’s very easy to do.
I will proceed with the following assumptions:
You know your way around STARLIMS
You have some scripting knowledge
Your STARLIMS version is 12.1 or higher (I leverage the REST API introduced with 12.1. It is possible to do it differently, but that is out of scope)
Xfd is the most difficult technology for this, so that’s what I will tackle. If you are running on HTML, it will just be easier – good for you!
Environment Setup
On your local PC
Install Java Runtime – you might have to reboot. Don’t worry, I’m not going anywhere!
Make sure you have access to setting up a Manual Proxy. This can be tricky and may require your administrators to enable this for you. What you’ll want is to be able to toggle it like this (don’t enable it just yet! Just verify you can):
Proxy Setup
On your STARLIMS Server
Make it available through HTTP. Yes, you have read properly: HTTP, not HTTPS. I think it can work over HTTPS, but I ran into too many problems and found HTTP is easiest. This is to simplify traffic recording when capturing a scenario for re-processing.
Create your load users. If you expect to run 100 simultaneous users, then let’s create 100! What I did is create users named LOADUSER001 to LOADUSER250 (so I would have 250 users) and set their passwords to something silly like #LoadUser001 to #LoadUser250. Like I said – don’t do this if there’s any sensitive data in your system.
To help you, the users can be generated with a script.
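Here is a sketch of such a generator – Security_Custom.CreateLoadUser is a hypothetical helper, standing in for however you provision users on your system:

/* sketch: generate LOADUSER001..LOADUSER250 with matching passwords ;
:DECLARE i, sNum;
:FOR i := 1 :TO 250;
    sNum := Right("000" + LTrim(Str(i)), 3);
    /* hypothetical helper - replace with your own user provisioning logic ;
    ExecFunction("Security_Custom.CreateLoadUser", {"LOADUSER" + sNum, "#LoadUser" + sNum});
:NEXT;
:RETURN .T.;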
You will need to test the above; on my system it worked fine (haha!), but setting passwords and security does not always work as expected in STARLIMS, so do not despair – just be patient.
Edit the web.config file. I will presume you know which one and how to achieve that. You need to change / add the following appSetting, set to false: <add key="TamperProofCommunication" value="false" />
Add an endpoint for the Encrypt function. That’s really the tricky part. In both XFD and HTML, STARLIMS “masks” the username and password when putting them in the authentication payload, to prevent sending them in clear text. But this encryption matters; it is done in .NET and not easily reproduced in JMeter… unless it becomes a REST API endpoint!
So, in a nutshell, the trick is to create a new API endpoint that receives a string and a key, calls the EncryptData(text, key) function, and returns the encrypted string. I will not stress it enough: do – not – enable – this – on – a – system – with – sensitive – data. And make sure you will only use load-testing users. If you do so, you’re fine.
Here is the REST API method to expose from STARLIMS.
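What follows is a minimal sketch, reusing the same Request class pattern as the regression endpoint from my earlier post; the text / key property names and the response shape are my assumptions:

:CLASS Request;
:INHERIT API_Helper_Custom.RestApiCustomBase;

:PROCEDURE POST;
:PARAMETERS payload;
:DECLARE finalOutput;
    finalOutput := CreateUdObject();
    :IF !payload:IsProperty("text") .or. !payload:IsProperty("key");
        finalOutput:StatusCode := Me:HTTP_NOT_FOUND;
        :RETURN finalOutput;
    :ENDIF;
    finalOutput:StatusCode := Me:HTTP_SUCCESS;
    finalOutput:response := CreateUdObject();
    /* EncryptData is the same function STARLIMS uses to mask credentials ;
    finalOutput:response:encrypted := EncryptData(payload:text, payload:key);
    :RETURN finalOutput;
:ENDPROC;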
Since it is exposed as a REST API, the concept is that at the beginning of the load test, for every user, we call this endpoint with the username and the password to get the encrypted version of each, which lets us tap into the STARLIMS cookie / session mechanism. Magic!
Now, we are kind of ready – assuming you’ve followed along and got everything setup properly and were able to test your API with POSTMAN or something like that. Before moving on, let’s take a look at a typical load test plan in JMeter:
Typical setup for a single scenario
The idea is that we want each user (thread) to run in its own “session”, and we want each session to belong to a different user. My scenarios always involve a user logging into STARLIMS once (to create a session) and then looping on the scenario (for example, one scenario could be about creating folders, another about entering results, etc.). I will leave the details of the test plans to you, but the idea is that you first log into the system, then do something.
At the level of the test plan, let’s add user-defined variables – in my case, this is only so I can switch STARLIMS instances later on (I strongly recommend you do that!):
User-defined Variables
Still at the level of the test plan, add a counter:
User Counter
This will be the magic for multiple users. Note the number format – this has to match your user naming convention, otherwise, good luck!
Now, let’s have our user login STARLIMS.
Add a Transaction Controller to the Thread Group. I renamed this one “System Login” – call it what you want.
On your new transaction controller, add a Sampler > HTTP Request, which will be our call to the REST API
HTTP Request – REST API for Encrypt method
As you can see, I did a few more things than just call the API. If we break it down, I have a pre-processor “Initialize User Variables”, an HTTP Header Manager, and a JSON Extractor. Let’s look at each of these.
Pre-processor – Initialize User Variables (Beanshell preprocessor)
This will run before this call is made – every time this call is made! This is where we initialize more variables we can use in the thread.
This will initialize the currentUser and currentPW variables we can reuse later on. Since this is a pre-processor, it means the request can reference them:
Now, let’s look at the HTTP Header Manager:
HTTP Header Manager – System Login
Pretty simple – if you have STARLIMS 12.1 or higher, you just need to get yourself an API key in the RestApi application. Otherwise, this whole part might have to be adjusted to your preferred way of calling STARLIMS. But, long story short, SL-API-Auth is the header you want, and the value should be your STARLIMS secret API key.
Finally, this API will return something (the encoded string). So we need to store it in yet another variable! Simple enough: we use a JSON Extractor post-processor:
JSON Extractor
What did we just do? Here’s a breakdown:
Initialized a user name and password in variables
Constructed a HTTP request with these 2 variables
Called the REST API with our secret STARLIMS key using this request
Parsed the JSON response into another variable
If you have set the thread group to simulate 10 users, then you’ll have LOADUSER001 to LOADUSER010 initialized. This is the pattern to learn. This is what we’ll be doing all along.
Wait. How did you know what to call afterward?
Great question! That’s where the proxy comes into play. Now, we don’t want to go around and guess all the calls, and, although I like Fiddler, I think it would be very complicated to use here.
In a nutshell, this is what we’ll do:
We’ll add a Recording Controller to our Thread Group
Right-click on your Thread Group > Add > Logic Controller > Recording Controller
We’ll add a Test Script Recorder to our Test Plan
Right-click on your Test Plan > Add > Non-Test Elements > HTTP(S) Test Script Recorder
Change the Target Controller to your recording Controller above, so you know where the calls will go
We’ll activate the proxy (bye bye internet!)
Open Windows Settings
Look for Proxy
Change Manual Proxy > Use a proxy server to on.
Local Address = http://localhost
Port = 8888
Click Save! I didn’t realize at first there was a save button for this…
We’ll start the Test Script Recorder
Test Script Recorder
We’ll perform our action in STARLIMS
WARNING: A good practice is to change the value of Transaction name in the Recorder Transactions Control as you progress. What I typically do is put SYSTEM_LOGIN while I launch STARLIMS, then SYSTEM_LOGIN/VALIDATE when I enter credentials, then SYSTEM_LOGIN/OK when I click OK, etc.
If all works well, you should see items being added to your Transaction Recorder.
We’ll stop the Test Script Recorder – just click on the big red Stop
We’ll deactivate the proxy (yay!) – just toggle it off.
You should have something like this in your recorder:
Recorded HTTP Requests
If, like me, you left Outlook open, you will have all kinds of unrelated HTTP calls. Just select these and delete them. You should be left with something like this:
After 1st cleanup
Now, let’s understand what happened here. We recorded all the calls to STARLIMS. If you wish, you can remove the GetImageById lines – typically, this should not have any performance impact as these should be cached. But heh, that’s your call.
Let’s look at the 1st request:
1st HTTP Request
Interestingly enough, we can see the Protocol is http, and the Server Name is our STARLIMS server. If you created user defined variables, then you can just clean these 2 fields up (make them empty). We can default them at the test plan level (later on). But if you do that, you must do it for all requests! So, let’s not do this (just yet). Let’s leave it as is.
Now, what we want is to re-run this so we have actual data to work with and can make our script dynamic. But we need to record all the requests sent and received.
Right-click on your Thread Group > Add > Listener > View Results Tree
I find this listener to be the best for this activity.
Now, let’s run this “as is” by clicking the play button
play
The beauty here is that you can get the data sent to STARLIMS as well as the responses, allowing us to understand how everything is connected. Let’s take a look at the Authentication.GetUserInfo call – that’s our first challenge:
View Results Tree
If you look at the Request Body, you’ll see your user name (which you used to log in), as well as a second, very strange parameter that looks like the string highlighted in pink above. When we log into STARLIMS, we must send that string, which is essentially the password hash based on the user name (one-way encoding). So the question is: how do we get this? This is where the REST API we prepared earlier comes into play!
Hook user variables to payload
With this, you can do everything now! Well, as far as load testing is concerned, it can at least get you started!
Earlier, I mentioned you shouldn’t leave your Server Name / Path / Protocol in there. Indeed, in my screenshot above, you can see it’s all empty. This is because I added an HTTP Request Defaults to my test plan:
HTTP Request Default
You’ll also want an HTTP Cookie Manager. This one doesn’t need configuration as far as I know, but it must exist so cookies are carried over.
CONCLUSION
What?? Conclusion already? But we were just getting started! Well, don’t worry. I have done a little bit more than just that, and I am including it with this post.
You will need to figure out a few things, like some of the APIs I use and some forms/scripts that you won’t have. But this should give you a very good overview of how it works and how it is all tied in together.
Here I was, trying with the infra team to access my Azure container through SAS. STARLIMS has built-in Azure container support, but it relies on a connection string with account information and all. Like most Azure customers, though, that is not our reality. We use shared containers, so we need a SAS token… which is not supported in a STARLIMS connection string.
This means the next step is plain web service consumption instead of direct container access. Is it complex? Less than I expected!
Step 1: let’s get a SAS token!
Now, finding said token is not always obvious, but mine looked something like this:
Hopefully, yours too! In the Azure Container tool, look for the “Shared Access Signature”, it’s the same thing.
Step 2 – integrate the Azure API!
Now, how do we put files there? The connection string and the STARLIMS tutorials will not help… but the web services will! All we need to do is write an UploadToAzureBlob procedure and a DownloadFromAzureBlob procedure (both in SSL) and that will do the trick:
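Here is a sketch of both procedures, hitting the Azure Blob REST endpoints through a .NET WebClient. The parameter shapes and the LimsNetConnect interop usage are assumptions to adapt to your installation:

/* sketch - upload text content to a blob using the container's SAS token ;
:PROCEDURE UploadToAzureBlob;
:PARAMETERS sContainerUrl, sSasToken, sBlobName, sContent;
:DECLARE oClient;
    /* assumption: LimsNetConnect bridges to .NET - adjust to your interop helper ;
    oClient := LimsNetConnect("System", "System.Net.WebClient");
    /* Azure requires this header when PUTting a new block blob ;
    oClient:Headers:Add("x-ms-blob-type: BlockBlob");
    oClient:UploadString(sContainerUrl + "/" + sBlobName + "?" + sSasToken, "PUT", sContent);
:ENDPROC;

/* sketch - download a blob's content as text ;
:PROCEDURE DownloadFromAzureBlob;
:PARAMETERS sContainerUrl, sSasToken, sBlobName;
:DECLARE oClient;
    oClient := LimsNetConnect("System", "System.Net.WebClient");
:RETURN oClient:DownloadString(sContainerUrl + "/" + sBlobName + "?" + sSasToken);
:ENDPROC;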
As simple as that, you’ve got yourself an upload and a download to Azure containers.
Conclusion
As you can see, as usual, this was quite easy! One just needs the correct information. The next step will be to see what more containers can bring to your STARLIMS installation.
This week was Abbott Informatics’ APAC Forum. Speaking with old colleagues, I got inspired to try to revive this site for the 4th time (or is it the 5th?).
I’m currently sitting in a performance enhancement session and thinking to myself: heh, that’s NOT how I would go about it (sorry guys!). Silver bullets? Nah.
The first step to improving performance is to identify what is slow (duh!). What are the bottlenecks? Why is it slow?
As a STARLIMS developer, I know that oftentimes, the code written in there is not necessarily the most efficient. Therefore, why not start by monitoring the performance of, let’s say, SSL scripts, which represent the backbone of the business layer?
I’m thinking: why not have a simple tool that will record, like a stopwatch, the execution time of all blocks of code, and then provide a report I can read? Heck, .NET has a Stopwatch class! Hey! STARLIMS IS .NET!
The more I think about it, the more I consider: let’s do it!
How do we do this?
First, let’s create a class. I like classes. I like object-oriented code. I like the way it looks in SSL afterward, and it makes it way easier to scale through inheritance later on. Question is: what should the class do?
Well, thinking out loud, what I want it to do is something like this:
Start the stop watch
Do something
Monitor event 1
Do something
Monitor event 2
Do something else
Monitor event x
so on and so forth
Provide a readable report of all the event duration
I also want it to count the number of times an event ran, and I want to know the AVERAGE time spent in there as well as the TOTAL time the event took.
Now that I know what I want, let’s write the class that will do it (for the sake of this example, I created it in the Utility global SSL category).
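Here is a sketch of the class. It assumes the SSL class Constructor runs on CreateUdObject and that LimsNetConnect can instantiate the .NET Stopwatch – swap in whatever timing source your environment provides:

:CLASS perfMonitor;
:DECLARE aEvents; /* one row per message: { message, count, total ms } ;
:DECLARE oWatch;  /* .NET Stopwatch doing the actual timing ;

:PROCEDURE Constructor;
    Me:aEvents := {};
    Me:oWatch := LimsNetConnect("System", "System.Diagnostics.Stopwatch");
    Me:oWatch:Start();
:ENDPROC;

:PROCEDURE Restart;
    Me:aEvents := {};
    Me:oWatch:Restart();
:ENDPROC;

:PROCEDURE Monitor;
:PARAMETERS sMessage;
:DECLARE i, nElapsed, bFound;
    nElapsed := Me:oWatch:ElapsedMilliseconds;
    bFound := .F.;
    /* aggregate on the message so repeated events accumulate count and total time ;
    :FOR i := 1 :TO Len(Me:aEvents);
        :IF Me:aEvents[i][1] == sMessage;
            Me:aEvents[i][2] := Me:aEvents[i][2] + 1;
            Me:aEvents[i][3] := Me:aEvents[i][3] + nElapsed;
            bFound := .T.;
        :ENDIF;
    :NEXT;
    :IF !bFound;
        aAdd(Me:aEvents, { sMessage, 1, nElapsed });
    :ENDIF;
    /* restart so the next Monitor() call only times its own block ;
    Me:oWatch:Restart();
:ENDPROC;

:PROCEDURE ToString;
:DECLARE i, sReport;
    sReport := "";
    :FOR i := 1 :TO Len(Me:aEvents);
        sReport := sReport + Me:aEvents[i][1] + ": count=" + LTrim(Str(Me:aEvents[i][2])) + ", total=" + LTrim(Str(Me:aEvents[i][3])) + "ms, avg=" + LTrim(Str(Me:aEvents[i][3] / Me:aEvents[i][2])) + "ms" + Chr(10);
    :NEXT;
:RETURN sReport;
:ENDPROC;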
The above gives us an object we can start, restart, and use to monitor events (messages). At the end, we call the typical ToString() and get our “report”. Example of use:
:DECLARE oPerformanceMonitor;
oPerformanceMonitor := CreateUdObject("SGS_Utility.perfMonitor");
lWait(1.3); /* fake doing something that takes time;
oPerformanceMonitor:Monitor('Step 1');
lWait(0.8); /* fake doing something that takes time;
oPerformanceMonitor:Monitor('Step 2');
lWait(1.1); /* fake doing something that takes time;
oPerformanceMonitor:Monitor('Step 3');
lWait(1.45); /* call Step 1 again to generate an aggregate;
oPerformanceMonitor:Monitor('Step 1');
:RETURN oPerformanceMonitor:ToString();
I have been using this in many places in our system, and it did help me find the best places to optimize our code. Sometimes, the same insert runs 500 times and totals up to 15 seconds; for the end user, that is worse than one call that runs only once and takes 3 seconds.
I hope this can help you find the bottlenecks of your SSL code!
Alright, it’s been a while, and this time around, I have something good! I had a situation where I wanted to take a screenshot programmatically from a web page. Although there are many examples out there for thumbnails, none really matched what I needed. I wanted to provide:
A maximum Width
A maximum Height
URL
Header (in case you need a special authentication header)
Turns out everything was out there, scattered in pieces here and bits there.
So I tinkered the whole thing together and came up with something that actually works quite well! I still have one challenge left though: how can I put this in a reusable DLL without running into threading issues? That is the question.
Here’s the code – pretty much self-explanatory.
/// <summary>
/// Take a snapshot of a web page. Image will be truncated to the smallest of
/// - smallest between rendered width and maximum width
/// - smallest between rendered height and maximum height
/// </summary>
/// <param name="webUrl">URL to take a snapshot from</param>
/// <param name="authHeader">Authentication header if needed</param>
/// <param name="maxWidth">Maximum width of the screenshot</param>
/// <param name="maxHeight">Maximum height of the screenshot</param>
/// <param name="output">output image file name</param>
[STAThread]
static void WebPageSnapshot(string webUrl, string authHeader, int maxWidth, int maxHeight, string output)
{
// requires references to System.Windows.Forms and System.Drawing
Uri uri = new Uri(webUrl);
WebBrowser browser = new WebBrowser();
browser.Size = new System.Drawing.Size(maxWidth, maxHeight);
browser.ScrollBarsEnabled = false;
browser.Navigate(uri, "", null, authHeader);
// This is what will make this render completely
while (browser.ReadyState != WebBrowserReadyState.Complete)
{
Application.DoEvents();
}
using (Bitmap bitmap = new Bitmap(browser.Width, browser.Height))
{
Point location = browser.Location;
browser.DrawToBitmap(bitmap, new Rectangle(location.X, location.Y, browser.Width, browser.Height));
bitmap.Save(output, ImageFormat.Png);
}
}
This is my new site. Welcome! I have closed the coffee store, as I don’t have time to maintain it or keep up with demand. Therefore, I will use this space for my professional self, for things that are interesting and not under the confidential seal.