MedDRA (Medical Dictionary for Regulatory Activities) is an internationally used medical dictionary developed by the ICH in the 1990s and widely used in pharmaceutical regulatory processes. One of its main uses is the coding of adverse event and adverse reaction data. MedDRA is available in English, Czech, Dutch, French, German, Hungarian, Italian, Japanese, Portuguese and Spanish.
The great advantage of MedDRA is that it organises adverse events reported by clinical investigators into a standard format, making it possible to discover groups of, and relationships between, cases that seem unique at first. This can be used for statistical reporting purposes during the creation of tables and listings. MedDRA is structured into hierarchical groups, arranged from very specific to very general. Based on this hierarchy, a specific event is listed under several connected groups. The hierarchical groups, from most general to most specific, are as follows:
- SOC (System Organ Class)
- HLGT (High Level Group Term)
- HLT (High Level Term)
- PT (Preferred Term)
- LLT (Lowest Level Term)
Among these groups, the SOCs (System Organ Classes) contain the most general terms, while the LLTs (Lowest Level Terms) describe fully specific events. Beyond the scope of any given analysis, MedDRA has contributed to the standardization of medical databases and thereby to a better assessment of diseases.
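The way a single reported event rolls up through the hierarchy can be sketched in R. The terms below are only illustrative stand-ins (real MedDRA terms and codes come from a licensed dictionary release):

```r
# Illustrative sketch of one MedDRA hierarchy path, from most general (SOC)
# to most specific (LLT); the terms are hypothetical examples.
medra_path <- data.frame(
  level = c("SOC", "HLGT", "HLT", "PT", "LLT"),
  term  = c("Gastrointestinal disorders",           # most general
            "Gastrointestinal signs and symptoms",
            "Nausea and vomiting symptoms",
            "Nausea",
            "Feeling queasy"),                      # most specific
  stringsAsFactors = FALSE
)

# A specific event (the LLT) is grouped under each broader level in turn;
# walking up the path ends at the SOC used for high-level tabulations.
soc_of_event <- medra_path$term[medra_path$level == "SOC"]
```

Grouping case reports by any of the higher levels is what makes seemingly unique cases comparable in tables and listings.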
Do you keep hearing about cloud hosting and wonder how it differs from a regular hosting plan? Perhaps you just built a new site and are wondering if you should give the cloud a try?
Cloud hosting is a more reliable, scalable, and secure option than a regular shared hosting plan. But shared hosting is, usually, cheaper and easier to set up.
In this article, Lucero Del Alba covers everything from control panel options and migration issues to the pros and cons of each option. We'll see what each option is about and, hopefully, help you decide whether it's better for you to stick with shared hosting or switch to a cloud plan.
Once Upon a Time on a Shared Hosting Plan …
Traditionally, when we needed to put a site online, we’d buy a domain, get a hosting plan, and FTP the site from our computer to the web. We grew so used to it that it became second nature.
We would typically have features such as a very comprehensive control panel, statistics, and email hosting for the domains registered on that account, among other things. But also some hard limitations, such as a certain amount of disk space, a given bandwidth, and a fraction of the CPU and the server memory.
For many brochure, portfolio, blog and small business sites, that’s perfectly adequate. But for many businesses, it’s not ideal. And even for a freelancer maintaining a couple of simple sites, it’s possible to run out of resources for a given site from time to time. (It’s no fun being asked by a client why the site is down.)
The VPS and Dedicated Server
One way of upgrading is to buy a bigger, slightly more expensive plan with a little more resources, in the form of a VPS (virtual private server). And if that doesn't cut it, you can rent a dedicated server: an entire machine in a hosting company's data center, all to yourself.
With a dedicated server, you get all of the server's resources in a non-shared environment for, let's say, $100 a month. Yes, about 20 times the price of a basic shared hosting plan. But hey, you wanted the whole thing, didn't you?
Whether you’ve stuck with shared hosting or jumped into the world of the VPS or dedicated server, it has probably all worked just fine, and you may never have contemplated trying anything else. Believe it or not, though, there’s now a generation of web developers that barely know what FTP is, having never used it.
… and Then the Cloud Hosting Plan Came Up
When Amazon Web Services (AWS) was first introduced, everything was new, and it seemed you needed an intensive course before you could start working with this cloud infrastructure.
But things have changed since then. Not only have more providers come onto the scene, but also more solutions that can be used out-of-the-box, including cloud hosting.
Continue reading on SitePoint!
As you probably know, there is a publicly available database which contains a great deal of information on the majority of clinical trials (at least on trials involving US citizens) started since 1983.
In this post I try to show what information is stored in this database and how you can manage it with free statistical tools.
I give a detailed description of the ID structure and provide solutions for specific scientific questions.
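Each trial in the registry is identified by an NCT number: the prefix "NCT" followed by eight digits (for example NCT00000102). A minimal check of that structure in R, with a helper name of my own choosing:

```r
# Validate a ClinicalTrials.gov identifier: "NCT" followed by exactly 8 digits.
# is_nct_id is a hypothetical helper, not part of the AACT download.
is_nct_id <- function(x) grepl("^NCT[0-9]{8}$", x)

is_nct_id("NCT00000102")  # TRUE
is_nct_id("NCT1234")      # FALSE: too few digits
```

Filtering on this pattern is a quick sanity check before merging files on the nct_id column.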
The questions I try to answer with this small presentation:
- How to determine the number of "recruiting" sites, how to generate a list of cities with the total number of recruiting facilities, and how to plot the recruiting sites on a Google map.
The data can be downloaded from
By choosing pipe-delimited text files, you can easily read the content with any text editor (I would recommend Notepad++).
If you have some statistical background, and especially if you have access to SAS, you can download SAS transport files as well.
After downloading a nearly 2 GB zipped file, you'll get a set of 40 files.
One of the tools that can be used to manage these files is R, or RStudio, its popular IDE.
As stated on the webpage http://aact.ctti-clinicaltrials.org, you can easily read the downloaded files with code like this:
read.table(file = "id_information.txt", header = TRUE, sep = "|", na.strings = "", comment.char = "", quote = "\"", fill = FALSE, nrows = 200000)
The most important file is the Studies database. In it you can find information on, among other things:
- last verification date
- number of arms and groups
The file contains data on more than 251 thousand studies (only the first 1000 can be found on our site).
Task 1: Determine how many open (overall status = 'RECRUITING') studies there are, tabulated by site.
We need to rely on the Facilities and Studies databases. The Facilities database (its first 1000 records) can be checked here.
To get a database containing both study and facility data, you have to merge the two databases. In R, this is done with the following commands:
library(Hmisc)
library(data.table)
library(DT)

studies <- read.table("DIR/studies.txt", header = TRUE, sep = "|",
                      na.strings = "", comment.char = "", quote = "\"",
                      fill = FALSE, nrows = 5000)
facilities <- read.table("DIR/facilities.txt", header = TRUE, sep = "|",
                         na.strings = "", comment.char = "", quote = "\"",
                         fill = FALSE, nrows = 5000)

# merge on the common trial identifier
sites <- merge(studies, facilities, by = "nct_id")

# keep only the columns of interest and normalise city names
my <- c("nct_id", "overall_status", "city", "state", "zip", "country", "name")
sitesa <- sites[my]
sitesa$city <- tolower(sitesa$city)
If you would like a table of the sites with 'Recruiting' status, you can obtain one like this:
with the commands:
datatable(setDT(sitesa_c_final)[, .N, by = .(overall_status,city)][order(-N)])
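The counting step in that command can be sketched with toy data. The column names (overall_status, city) follow the merged table above, but the rows here are invented for illustration:

```r
library(data.table)

# Toy stand-in for sitesa_c_final: one row per recruiting facility
sites_toy <- data.table(
  overall_status = rep("Recruiting", 5),
  city = c("boston", "boston", "london", "boston", "london")
)

# Count facilities per (status, city) pair and sort descending,
# as in the post's datatable(...) command
counts <- sites_toy[, .N, by = .(overall_status, city)][order(-N)]
# boston should count 3 facilities, london 2
```

The .N symbol is data.table's row counter within each by-group; wrapping the result in DT::datatable() merely renders it as an interactive HTML table.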
Or what if you would like to show the status of the sites on a Google map? No problem, but I would recommend switching from RStudio to KNIME.
If you would like to place the sites on a map, you'll need their exact coordinates. The good news is that this information is also available for free: you can download the necessary database from the MaxMind site ( https://www.maxmind.com/en/free-world-cities-database ).
The coordinates can be added to the database of cities with the following code:
coords <- read.table("e:/_job/clinicaltrials.gov/worldcities/worldcitiespop.txt",
                     header = TRUE, sep = ",", na.strings = "",
                     comment.char = "", quote = "\"", fill = FALSE)

# join on the (lower-cased) city name
sitesa_c <- merge(sitesa, coords, by.x = "city", by.y = "City")

# keep only the recruiting sites
sitesa_c_final <- subset(sitesa_c, sitesa_c$overall_status == "Recruiting")
This sitesa_c_final table is then passed to KNIME, where the following actions should be performed:
The outcome looks like this: the sites shown (labelled with their names) are those with 'Recruiting' status.