When all the toes are set and the final results are calculated, the ‘fun’ part can begin! Now we get to analyze the results and draw smart conclusions based upon them. Note that some results are calculated based upon the average contacts (like the pressure per zone, the location of the zones and the foot axis), while the rest are calculated for each individual contact and then averaged.
The average pressure over time with a standard deviation (N/cm^2). This is basically the sum of all the sensors divided by the surface of all the activated sensors.
Note that the surface is probably an overestimation, because even when a sensor is only partially loaded, it still counts for its entire surface. There are some ways to counter this, but a better approach is probably to ignore the really low values (<0.1 N/sensor). Interestingly enough, the values remain pretty constant once they reach their maximum.
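As a rough sketch of what this calculation boils down to (the function name, sensor area and threshold below are made up for illustration, not taken from my actual code):

```python
import numpy as np

def average_pressure(frame, sensor_area=0.25, noise_floor=0.1):
    """Mean pressure (N/cm^2) over the loaded surface of one frame.

    frame is a 2D array of forces per sensor (N); sensor_area is the
    area of a single sensor in cm^2. Sensors below noise_floor are
    ignored, to avoid overestimating the surface."""
    loaded = frame > noise_floor
    surface = loaded.sum() * sensor_area      # cm^2 of active sensors
    if surface == 0:
        return 0.0
    return frame[loaded].sum() / surface      # N / cm^2

frame = np.array([[0.0, 0.05, 1.2],
                  [0.8, 2.0,  0.0]])
# Active sensors: 1.2, 0.8 and 2.0 -> 4.0 N over 3 * 0.25 cm^2
print(average_pressure(frame))  # 5.333...
```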
The average force over time (N), which is simply the sum of all the activated sensors.
In humans we generally see an M-shaped curve (purple line), which is due to the ‘rockers’ in the human foot. The first peak is caused by loading the rear foot (white line); the second peak is caused by shifting the weight towards the forefoot (green line) in order to push off. In the dog’s case it’s just one peak, which most likely has to do with the way quadrupeds walk.
The average surface over time (cm^2). The surface is almost the reverse of the pressure: the dogs tend to put down their paws very flatly (all toes making contact) and don’t start taking them off until late (>60%) in the stance phase. The lift-off also seems to happen pretty evenly in most cases: first the central toe, then the medial and lateral toes, and lastly the two front toes are lifted off.
The average force (not pressure!) for each ‘toe’ or 2×2 zone (N).
Blue is the central/rear toe; then from medial to lateral: green, red, light blue and purple. Because the surface is the same, you can easily compare the forces. The vertical lines depict when the central toe reaches its maximal pressure and when it’s lifted off. This is analogous to the phases of gait as described by Willems et al. (2004, pdf link), which describe very reliable phases during the roll-off of a human foot: starting at the landing of the heel, to the first contact of a metatarsal, to the contact of all metatarsals, to the lifting of the heel and eventually the foot.
As you can see there are a lot of similarities between the left and right paws, and within each paw the force values of the individual toes are very similar.
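Finding those two vertical lines boils down to something like the sketch below; the function name and noise threshold are mine for illustration, not from the actual code:

```python
def toe_phase_markers(curve, noise_floor=0.1):
    """Return (index of maximal load, index of lift-off) for one toe's
    force-over-time curve. Lift-off is taken as the first sample after
    the maximum where the force drops below the noise floor."""
    peak = max(range(len(curve)), key=lambda i: curve[i])
    lift_off = len(curve) - 1
    for i in range(peak, len(curve)):
        if curve[i] < noise_floor:
            lift_off = i
            break
    return peak, lift_off

curve = [0.0, 2.0, 5.0, 4.0, 1.0, 0.05, 0.0]
print(toe_phase_markers(curve))  # (2, 5)
```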
The center of pressure plotted on an image of the maximal values of each sensor.
It seems every paw first lands more on the lateral side, only to stabilize somewhere in the middle. In most cases the line is so straight that it indicates a very good balance between the medial and lateral side of the paw. Imagine the ankle as a weight scale, where the left and right side are in a constant battle to keep the scale in balance. Keeping that balance requires the muscles on both sides of the paw to contract with just the right amount of force. I expect lame dogs, which may be lacking this balance, to show a completely different center of pressure.
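The center of pressure itself is just the force-weighted average of the sensor coordinates. A minimal sketch for a single frame (not my actual implementation); track it over time by applying this to every frame:

```python
import numpy as np

def center_of_pressure(frame):
    """Force-weighted average of the sensor coordinates for one frame
    (a 2D array of forces). Returns (row, col) in sensor units."""
    total = frame.sum()
    rows, cols = np.indices(frame.shape)
    return ((rows * frame).sum() / total,
            (cols * frame).sum() / total)

frame = np.array([[0.0, 1.0],
                  [0.0, 3.0]])
print(center_of_pressure(frame))  # (0.75, 1.0)
```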
The location of the toes (manually set by me).
The size is fixed at the moment, though technically I could reduce it to 1×1 or scale it up for larger animals. Note that the location in the images may be slightly off, because the interpolation I use (scipy’s map_coordinates) slightly translates the paw, which frustrated me to no end. Sadly, I haven’t found a solid solution for this.
The paw axis, from the central toe to a point between the two central toes.
In humans I believe the angle is between 2 and 12 degrees, where a positive angle means the paw is exorotated. My definition isn’t perfect, especially because I found some large variations in the shape and loading pattern of the central toe. However, I think it will help find extreme outliers in either direction.
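The angle calculation itself is simple once the toe positions are known. A sketch with made-up coordinates (my real code works on the detected toe zones, and which direction counts as ‘forward’ is an assumption here):

```python
import math

def paw_axis_angle(central_toe, front_medial, front_lateral):
    """Angle (degrees) of the line from the central toe to the midpoint
    between the two front toes, relative to the direction of travel
    (assumed here to be the +y axis). Positive means exorotation.
    Each point is an (x, y) tuple in plate coordinates."""
    mid = ((front_medial[0] + front_lateral[0]) / 2,
           (front_medial[1] + front_lateral[1]) / 2)
    dx = mid[0] - central_toe[0]
    dy = mid[1] - central_toe[1]
    return math.degrees(math.atan2(dx, dy))

# Central toe at the origin, front toes slightly offset to one side:
print(paw_axis_angle((0, 0), (0, 5), (1, 5)))  # ~5.71 degrees
```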
The step length, width and duration for each paw compared to the other paws, with an image of the relative positions of each paw.
I’d love to figure out how to turn the image with the relative positions into an animation that shows how the paws are positioned relative to each other over time. Especially for the running trials, the paws land so close to each other that you can’t really imagine what this means in practice. Still, the current image does help visualize how large the steps and strides are; if the dog were lame on one paw, it would probably have an asymmetrical step length and easily stand out.
And lastly a dashboard with some useful stats.
For Pressure/Force/Surface I calculated the maximal value, the percentage of the stance phase where the maximum was found, and the ratio between left and right. I first used an Asymmetry Index (ASI), but I found those values much harder to interpret; they probably only make sense if you can compare them between populations. For each of the forces per zone I calculated the same, with the exception that the percentages at the end aren’t left vs right, but the ratio between the toes (which seemed much more useful).
At the bottom you find the axis in degrees, where exorotation is positive. The ‘Timing’ next to it gives the step length, width and duration, comparing either the left vs right paws (step) or the paw with itself (stride).
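For reference, a commonly used definition of the Asymmetry Index looks like the sketch below. I’m not certain it’s the exact formula I used, so treat it as illustrative; it does show why the plain ratio is easier to read at a glance:

```python
def asymmetry_index(left, right):
    """A commonly used Asymmetry Index: the left-right difference as a
    percentage of the mean of both sides."""
    mean = (left + right) / 2
    if mean == 0:
        return 0.0
    return (left - right) / mean * 100

def simple_ratio(left, right):
    """The plain left/right ratio the dashboard settled on instead."""
    return left / right

print(asymmetry_index(105, 95))  # 10.0
print(simple_ratio(105, 95))     # ~1.105
```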
I actually already made changes to the version you see above, because I initially had the ids of the two protocols (running and walking) hardcoded in my calculations, so I was sure everything would work. But it turned out to be fairly trivial to make it generic enough to allow any protocol id. The list of measurements on the side is therefore replaced with a list of protocols, so you can easily switch between them.
After several months of pain, sweat and tears I’ve finally wrapped up a first alpha version of my app! For this initial version I mainly focused on getting features working in a semi-usable way, which means it may look rough around the edges, but it gets the job done and in most cases relatively fast. As some of you may remember, we’ve measured over 30 dogs, each with 24 measurements. Each measurement contains anything between 6 and 12 contacts, depending on the size of the dog. This leads to a total of about 7000 contacts(!), which I’ve manually labeled with a paw (Left Front, Left Hind, Right Front and Right Hind). On top of that I manually assigned the location of each of the five toes, though I cheated by only doing this for an average contact based on whether the dog was running or walking. Still, this means 25 dogs, 2 types of trials, 4 contacts and 5 toes, totaling around 1000 toe positions. Note that the number of dogs was slightly reduced, because some of them were so small or light that it was impossible to discern any toes.
Now I bet you’re curious what this all looks like! Well I won’t keep you waiting any longer.
I’m a huge fan of Microsoft’s Ribbon and luckily wxPython has its own version. So I was quick to add one myself, because it allows me to use tabs to switch between logical sections of my app and gives me large icons which make for easier clicking (due to Fitts’s Law). While the current 48px is probably a bit overkill, I’m still fairly happy with them. The only thing that bothers me is all the stock icons and the depressing amount of duplication. Worse, because I compressed so much functionality into one screen, I’m also stuck with a huge number of buttons.
Basically it reminds me of this:
Image from Stuffthathappens.com
Now this isn’t a fair comparison, because good luck trying to insert subject info or manage a database with just one button, but that’s not to say there isn’t room for improvement.
As you can see, the main tab consists of 4 elements:
- Searching the database
- Adding subjects to the database
- Creating a medical history (anamnesis) based on tags (work in progress)
- Creating a session and adding measurements to the session
I think I’m going to reorganize the panel so that when you start, you basically get a Google-like interface: search for a subject and you can get started. If you need to analyze new data, you press a button to add a subject and the other relevant buttons appear (making the Ribbon context sensitive, like Office) along with the panel to insert the data. Finished inserting a subject? Great, we switch to the panel for adding measurements. Want to add a more detailed medical history? Switch to just those panels and adjust the Ribbon accordingly.
Another reason for wanting to change the main tab is that this was literally the first code I wrote, which means it’s horrible. I tried to maintain some sense of an MVC structure, but due to my limited experience I failed pretty badly and a lot of functions need to be untangled.
Processing the measurements
Now on to the more interesting stuff: processing the data.
Again the Ribbon is crazy crowded at the moment; this is because there are so many actions required to allow for a flawless and usable paw annotation. Imagine this:
- Search for contacts
- Refresh loads the average contacts from the database
- Remove any contacts that are ‘incomplete’
- Save the results when you’re done
- Delete contact removes it from the list
- Previous/Next Contact let you switch between contacts
- Undo it if you make a mistake
- Delete all the contacts in case they are saved the wrong way
- Marking the four contacts
- Add a protocol so we can discern between measurements
- My magic eight ball to parse measurement names into protocols
- Cancel protocol sets all the choices back to default in case you make an error
- Delete all the protocols in case you made a mistake
Then there’s a couple of buttons which could be made context sensitive, because they aren’t needed until you need to assign the zone locations.
- Find zones button was supposed to mark any zones it could find
- Add a zone (moving is done with the keyboard arrows)
- Save the zone locations in the database
- Undo all the zones in case you made a mistake (before assigning)
- Delete the zone
As you can see there’s a somewhat recurring pattern: create -> store -> delete
Perhaps I could ‘streamline’ this process by making the program assume what the user wants to do after certain actions, but honestly I think that’s far too error prone and we’re talking about science here. Shuttles have exploded for errors like this and we don’t want to fit a pair of orthotics based on erroneous data.
Besides, what computer geek honestly uses buttons anyway? I already added several keyboard accelerators to make it easy to be more productive: Ctrl+F to search, Ctrl+O to remove incomplete contacts, and 7 (Left Front), 9 (Right Front), 1 (Left Hind), 3 (Right Hind), which map to the anatomical order of the paws.
Ctrl+S saves all the annotations to the database; rinse and repeat. In case there’s a contact you don’t want to have stored, simply don’t annotate it and it won’t get saved. When you save a measurement, it will automagically load the next one, to reduce redundant key presses.
How can I improve my paw detection?
Of course, I couldn’t have made this feature without the help of Joe Kington, my Stack Overflow hero, who gave this awesome answer that helped me find and sort my paws. While I personally find the second answer more impressive, I didn’t end up using his principal component analysis, because in its current form it doesn’t perform much better than chance. However, based on the results I’ve gathered so far, I should be able to come up with additional heuristics to make the algorithm perform better.
This also shows why it’s so great to have an application wrapped around all the scripts I had in the beginning: I can extend existing functionality without having to redo a lot of work, because a lot of the required foundation is already in place. For example, the GUI allows me to add multiple panels, which lets me easily switch between different views of the same data. If I didn’t have a GUI, I would have had to create several figures or switch between them with a command line interface. You can imagine that performing such tasks is very error prone and quite tedious if you constantly have to pass slightly different arguments to display the data you want.
Back to Joe’s code: his find_paws function, while dead simple, totally did the job:

def find_paws(data, smooth_radius=5, threshold=0.0001):
    # Smooth the data, threshold it and fill any internal holes
    data = sp.ndimage.uniform_filter(data, smooth_radius)
    thresh = data > threshold
    filled = sp.ndimage.morphology.binary_fill_holes(thresh)
    # Label each connected region and return a slice for each one
    coded_paws, num_paws = sp.ndimage.label(filled)
    data_slices = sp.ndimage.find_objects(coded_paws)
    return data_slices
Now it’s not flawless. As you can see with the contact in the middle, it’s larger than the smoothing radius used by the uniform filter, with the result that it gets recognized as separate contacts. The problem gets worse with human feet when any midfoot contact is lacking, because then it will split up the foot into a rear foot and a forefoot.
While I could make the smoothing radius scale with the size of the dog (or its weight, for instance), this is a slippery slope. When the dog is running fast, the front and rear paws land on nearly the same spot almost at the same time. If the smoothing radius is too large, it might ‘merge’ these two prints. As with most algorithms, when you try to optimize for certain cases, it’s bound to perform worse in others. The ‘easy’ way out is to make it as easy as possible to manually correct the algorithm and to use the adjustments as feedback for future corrections.
When you search for the contacts, every contact gets highlighted by a white square. As soon as a contact is annotated with the appropriate paw, it’s color coded accordingly. On the left we see a list of all the found contacts, which shows the duration of the contact (in frames), the maximal force (in newtons) and the maximal surface (in cm^2). Originally I thought this list might be useful to see the differences between contacts, but I found I never looked at it.
Instead, I’d use the comparison tool at the bottom. What it does is compare the currently selected contact (yellow square) with average representations of the already annotated contacts. It also predicts which paw the contact probably is, based on a simple subtraction: subtract the selected contact pixel by pixel from each average contact; the one with the smallest difference is the most likely match. I’m sure there are better comparisons, like using PCA, comparing all the frames instead of just the maximal values, or comparing multiple values. But honestly, I found that after annotating 2 or 3 contacts the comparison would do a pretty good job, and beyond that even I had a hard time ‘guesstimating’ which contact it resembled most.
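In sketch form the prediction is nothing more than this (illustrative names, and it assumes the arrays already share one shape, which the real code has to arrange first):

```python
import numpy as np

def predict_paw(contact, averages):
    """Predict a label for `contact` by subtracting it pixel by pixel
    from each average contact and picking the label with the smallest
    summed absolute difference."""
    best, best_diff = None, float('inf')
    for label, average in averages.items():
        diff = np.abs(contact - average).sum()
        if diff < best_diff:
            best, best_diff = label, diff
    return best

averages = {'LF': np.array([[5.0, 1.0]]),
            'RF': np.array([[1.0, 5.0]])}
print(predict_paw(np.array([[4.0, 2.0]]), averages))  # 'LF'
```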
Something that might be of use, as someone suggested on Stack Overflow, is using inverse kinematics: if two paws are in contact with the ground, the next contact can’t be one of those two. This should greatly reduce the number of options at any given point in time. Furthermore, in a lot of cases there’s a clear pattern in which the paws are placed, i.e. Right Front, Left Hind, Left Front, Right Hind, etc. One might even wonder whether a measurement where this pattern doesn’t occur was valid at all. Obviously this doesn’t hold in all cases, because some dogs inherently walk seemingly randomly, alternating between a normal and an amble pattern.
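The constraint itself is trivial to express; here’s a toy version that ignores timing completely:

```python
def possible_paws(on_ground, all_paws=('LF', 'RF', 'LH', 'RH')):
    """Candidate labels for the next contact: any paw that is not
    currently on the ground. A toy version of the inverse-kinematics
    style constraint; a real version would also score candidates by
    the expected step pattern."""
    return [paw for paw in all_paws if paw not in on_ground]

print(possible_paws({'LF', 'RH'}))  # ['RF', 'LH']
```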
Another important feature is the adding of protocols, which is basically a way to tag measurements. For instance, when I try to fit someone a pair of running shoes or orthotics, I’d generally measure them both barefooted and shod, walking and running, and wearing several shoe models or variations of an orthotic. This means I can’t calculate an average over all these different ‘protocols’, because they have very different results. My current version can only sort measurements based on one protocol. Technically I could add a slew of protocols like ‘Barefoot Walking’ to create what I want, but you can understand the list would become very long if I had to make combinations for every type or brand of shoe…
Peak detection in a 2D array
After processing all the measurements, the results get calculated. One of the steps is calculating an average contact for each paw within the same protocol. This has several advantages, because loading all 3D arrays for each contact with the same protocol is quite data intensive. Furthermore, I’m mostly interested in average results, not in single contacts, so I might as well compute the average up front. Now I understand this is an enormous data reduction, but as you’ll see next it comes with a similarly enormous advantage: I only have to set the toe positions (zones) once per paw!
You can imagine that with 4 paws, 5 zones and 2 protocols, it would become very laborious if I had to set the zones for every individual contact. Now I only need to set 40 zones, whereas otherwise it would probably have been at least 200 per dog(!).
Now there might be a lot of improvements I could make with regard to the averaging. Currently I’m not making any corrections, while slight paw rotations and shape differences affect the average. Rotations mean the toes aren’t aligned, which might increase the average’s values between the toes and lower those of the toes themselves. The shape differences, for instance when the rear toe wasn’t fully loaded, result in a ‘shorter’ contact. Since the arrays start behind the contact, this means the toes would end up halfway through the average contact.
Again though, it would probably be a better decision to ignore any extreme outliers, unless the variation is more the rule than the exception. One possible solution would be to rotate and translate the contacts to achieve the optimal overlap. An interesting method I’d love to use is this form of shape matching, where any shape is transformed into a time series. I could then figure out how to transform the contacts so they overlap better, which in this case means the smallest Euclidean distance. Obviously, thinking up ways to do so is easier said than done, but these calculations could be used for other purposes down the line as well, like tracking motion.
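For completeness, here’s the naive kind of averaging I’m describing, with zero padding to the largest shape; the rotation and translation corrections discussed above are exactly what it lacks:

```python
import numpy as np

def average_contacts(contacts):
    """Average 2D contacts of slightly different shapes by zero padding
    each one to the largest shape (top-left aligned). A naive sketch:
    it does not rotate or translate the contacts for optimal overlap."""
    max_rows = max(c.shape[0] for c in contacts)
    max_cols = max(c.shape[1] for c in contacts)
    total = np.zeros((max_rows, max_cols))
    for c in contacts:
        padded = np.zeros((max_rows, max_cols))
        padded[:c.shape[0], :c.shape[1]] = c
        total += padded
    return total / len(contacts)

a = np.array([[2.0, 2.0]])    # a 1x2 'contact'
b = np.array([[4.0], [4.0]])  # a 2x1 'contact'
print(average_contacts([a, b]))
# [[3. 1.]
#  [2. 0.]]
```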
Because the story was getting so long, I’ve cut it in two pieces. So read on for part two where I talk about the results!
It’s been nearly three months since my last blog post. Back then I had asked a question about how to insert data into a MySQL database, because I needed to store my measurements. Today I’ll take you on a small journey and show what I’ve done since then.
Starting with the database
I started out with a wxPython example that allowed me to fill in basic database info, like the subject’s name and address.
Then I tried inserting this into MySQL when you pressed the Save button, which required me to learn how to retrieve values from text controls and trigger functions when pressing buttons.
Adding a Ribbon
Next I added a Ribbon, because I just needed to have one. As you’ll see later on, my application has clearly separated parts, therefore it makes no sense for those parts to share buttons. The Ribbon takes care of that.
Moreover, I learned how to add other segments to a Panel (and why a Panel != a Frame) and I got to mess with Sizers to divide everything on a Panel.
Adding a search box
Now that I could add subjects to my database, I needed a way to search through them. So I added a Search control and a list to display them.
I learned a new thing! If you don’t add unique constraints, you’ll get a load of duplicate subjects in your database… I also looked into a tutorial that explained how to ‘switch’ between panels. Basically, you add all panels to the main frame, set one to Show and all the others to Hide. Then, based on some input, you switch the visible panel to Hide and the next one to Show. In my case, switching tabs on the Ribbon triggers this function. Neat!
Adding measurements to the database
Now my application was already getting more complex. I had several classes for all the different panels, added icons to each of the buttons and moved every button from the panel to the Ribbon. I also added a Session panel, where you create a session and add all the different measurements that belong to that same session.
Small detail: I switched from using list controls to ObjectListViews, because it creates list controls from model objects. Kind of like an ORM I guess. It took some getting used to, but after figuring out how to do things like returning the selected object, I like it a lot.
On to processing the measurements
Oh how I love rapid iterations! I made a first version of the Processing panel that should display a list of all the measurements and an image of the entire plate.
Fast forward: I added a list with all the contacts that were found in the measurement, and I rotated the image so it’s horizontal, giving me space to display average contacts below it. I had some ‘scaling’ issues, because I still didn’t understand sizers 100%. But finally I got more or less what I intended: an average image for the 4 different contacts (LF, LH, RF, RH) and in the middle the currently selected contact.
I also encountered a peculiar behavior, where the Ribbon panel won’t display any buttons until the RibbonButtonBar actually contains at least three of them… Very strange! While this was easily ‘solved’ by adding a temporary dummy button, a much better solution was adding keyboard accelerators (see snippet).
wx.AcceleratorTable([
    (wx.ACCEL_CTRL, ord('S'), ID_SAVE),
    (wx.ACCEL_CTRL, ord('N'), ID_ADD),
    (wx.ACCEL_CTRL, ord('F'), ID_SEARCH),
])
I’ve added keyboard shortcuts for nearly every important function, which also saves me a lot of time when I have to test a new piece of code.
We want to see results!
While the processing panel was far from done, I wanted to visualize the results in some way to see whether I was actually doing a good job on the contact annotating front.
So I added panels to visualize the sum of the pressure over time, temporal and spatial information and an average contact for the four paws.
Work in progress!
After that I went back to work on the processing panel. While there have been some intermediate changes, this is more or less its current state. The currently selected contact is highlighted with a yellow line, the other contacts are white when unassigned, green (LF), cyan (LH), red (RF), magenta (RH) when assigned to a paw.
In earlier versions you could assign contacts by clicking the average image below. However, the only way I could trigger an event from this was by using a child focus event, which had the unfortunate side effect of being triggered several times when focus switched, and I couldn’t find a good way of managing this. In the end I decided to add dedicated toolbar buttons and keyboard shortcuts, which work out fine. Just like in my Paw Annotator 1.0, you can assign contacts by pressing 7 (LF), 9 (RF), 1 (LH), 3 (RH) on the keypad, which maps nicely to the layout of the paws themselves.
Previous iterations only allowed you to annotate contacts, which meant that if you made a mistake, you had to start all over! By now you can undo annotations (Ctrl+Z), cycle through the contacts with the left and right arrow keys, and through the measurements with the up and down arrow keys.
Deleting measurements was tricky at first as well, because a measurement has contacts and measurement data, and the contacts have results tied to them. Based on one of the Stack Overflow answers, I started out using MyISAM instead of InnoDB, which means I don’t have any foreign keys to enforce the relations. So deleting meant some double checking before actually removing things, because otherwise I might end up with orphaned data in my database!
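The double checking amounts to deleting in dependency order, children before parents. A sketch with sqlite3 standing in for MySQL, and with table and column names invented for this example:

```python
import sqlite3

# Minimal schema mimicking the described relations: a measurement has
# contacts, and contacts have results. No foreign keys, as with MyISAM.
conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE measurements (id INTEGER)')
cur.execute('CREATE TABLE contacts (id INTEGER, measurement_id INTEGER)')
cur.execute('CREATE TABLE results (contact_id INTEGER)')
cur.execute('INSERT INTO measurements VALUES (1)')
cur.execute('INSERT INTO contacts VALUES (10, 1)')
cur.execute('INSERT INTO results VALUES (10)')

def delete_measurement(cur, measurement_id):
    """Delete a measurement and everything hanging off it, in
    dependency order, so no orphaned rows are left behind."""
    cur.execute('DELETE FROM results WHERE contact_id IN '
                '(SELECT id FROM contacts WHERE measurement_id = ?)',
                (measurement_id,))
    cur.execute('DELETE FROM contacts WHERE measurement_id = ?',
                (measurement_id,))
    cur.execute('DELETE FROM measurements WHERE id = ?',
                (measurement_id,))

delete_measurement(cur, 1)
conn.commit()
print(cur.execute('SELECT COUNT(*) FROM results').fetchone()[0])  # 0
```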
I also added numbers next to the average contacts; these help give you an idea which paw it might be, based on the size, the contact time and the maximal force. It also features a prediction, which suggests what paw it might be. However, this is currently based on a simple equation where the current contact is subtracted from each average contact, assuming the difference is smallest for the paw it belongs to. Obviously this is open for improvement, based on the results I hope to find!
Another thing I added are protocols (and a medical history to the database screen). These are basically ways to label the measurements, so they can be categorized in the analysis. Examples are separating walking and running measurements or, in humans, barefooted vs shod measurements. The current implementation either lets you pick several labels manually or use a ‘profile’, which is basically a collection of regularly used labels. Because I’m lazy, I added a Magic Eight Ball button, which parses my measurement names and assigns the correct profile. Sadly for my users, I haven’t figured out how to let them tweak this through a simple GUI.
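The Magic Eight Ball is really just name parsing. Since my actual naming scheme isn’t shown here, the pattern below is invented purely for illustration:

```python
def guess_profile(measurement_name):
    """Guess a protocol profile from a measurement's file name.
    A toy version of the 'Magic Eight Ball'; the substring it looks
    for is a made-up convention, not the real naming scheme."""
    name = measurement_name.lower()
    return 'running' if 'run' in name else 'walking'

print(guess_profile('Dog12_run_3.zip'))   # 'running'
print(guess_profile('Dog12_walk_1.zip'))  # 'walking'
```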
Lots of database stuff
Don’t mind the icons; it’s already starting to become too clunky for my liking and perhaps I’ll find alternative ways to manage these functions. I’m also not 100% happy with the current interface, because there are so many lists and text controls that it gets confusing what to do first.
However, compared to the previous version, you now have a list of sessions attached to a subject. Each session consists of measurements and, like the protocols, there’s now an option to add a medical history (anamnesis). This will also be used to categorize the results in a more comprehensible way, though if it gets more complex, I think it will need a wizard to be really useful in a clinical setting.
I actually ‘wasted’ quite some time making this as user friendly as possible, by implementing an autocomplete that displays any match with the options from the list. Each list is connected to the one on its left-hand side: Primary problems has six subcategories, System diseases may have yet another number of subcategories, and eventually you get to say how severe something is or where it is located. Please note: this is work in progress!
Again, while making this I ran into some headache-inducing problems, because while the list is very complete for orthopedic shoemakers/podiatrists, that doesn’t really make it useful for veterinarians. So again I need to come up with a way to allow the user to alter these lists, back them up and restore them when they update the software… Sigh.
We want more results!
While it’s nice that the application now allows users to add data to subjects and annotate all the contacts, in the end they want to be able to analyze them. So I spent some more time getting the results in better shape. Since the results currently aren’t separated based on their protocol (yet!), they are an average over all the trials, which, well… can give some strange results.
Here’s an example of the temporal-spatial screen, where you get the step length, width and time for each paw relative to itself and the other 3 paws. Below I’ve tried to recreate an entire plate image, based on the step length and width, so you get an idea of the walking pattern.
I tried a different way of calculating the foot/paw axis, based on Friso Hagman’s calculations. Since my own experiments didn’t bear any fruit, I turned to Stack Overflow once more: How to calculate the axis of orientation? While Joe did a great job of implementing the calculation, the shape of the paws turned out to be pretty problematic. As Joe puts it: “In other words, a dog’s paw is close to round, and they appear to put most of their weight on their toes, so the ‘back’ toe is weighted less heavily than the front in this calculation. Because of that, the axis that we get isn’t going to consistently have a relationship to the position of the ‘back’ toe vs. the front toes.” To spare myself the humiliation, I’m leaving my current version out :-P
My center of pressure calculations are most likely correct, or at least they were before I implemented them in my application. But as the following image shows:
apparently I’m displaying certain data upside down. So I’m wondering whether my horizontal axis is really correct and whether the center of pressure and average contact even have the same orientation… I’ll probably have to try it on a human measurement, so I have a better idea of what it should look like.
Because the foot axis isn’t working as intended, I haven’t worked on my toe detection yet. I was originally hoping to use the foot axis to rotate the contacts to a neutral position; that way I can make a much better estimation of where the toes should be: two front toes on either side of the axis and one around the axis at the rear. Ironically, a better toe detection would allow a better foot axis calculation, so I feel like a dog chasing its own tail.
Power to the population
One of the things I had in mind with the results was to compare them with ‘normal data’, so you get a better sense of whether a dog’s individual results are (ab)normal. While we’ve started to get some feeling for this in humans, it simply hasn’t been done for dogs.
This creates several problems. I don’t have any curated ‘normal data’ yet; of course that’s the purpose of this project, but it means I’m aiming at a moving target. Another problem comes from differences within the population: there are differences in weight, in walking patterns, the lack of distinguishable toes, and several unknown factors that may cause large deviations within my clusters. This raises several questions: should I normalize the pressures so I can compare small and large dogs, or are there other factors that make this comparison useless? Does the difference in walking patterns have a significant impact on the pressure distribution, or is the effect negligible? How should I compare contacts with and without distinguishable toes; should I guesstimate their locations for a comparison or not?
While it’s not a problem to go down each path, calculating these different solutions for highly dimensional data (5 weight groups, 2 walking speeds, 4 paws), differently for most of my results (1 value, 8 values, 1D arrays, 2D arrays, etc.), is quite cumbersome, so you can see I could easily waste a week chasing the wrong idea. Another issue is that I need to display all these results in a sensible way and maintain a usable GUI to switch between the different modes. It’s clear that there are still some challenges ahead!
Given that I only added my population based results last week, there’s not much interesting to show you. But here’s an example of the temporal-spatial results, based on the weight categories and walking speeds.
Currently all the population results are calculated every time I need them, since I haven’t decided on a definitive format yet. I do plan to store the average + deviations for every dog, for each atomic grouping. In this case I would end up with two results per dog: its walking and its running results. That way I can see what the most sensible weight categories are and try to figure out what walking patterns there are.
Please keep in mind that three months ago I had never worked with MySQL or wxPython before. My only programming experience was basic data analysis, yet here we are with a first rough version of an application. It now consists of a little over 100 Python modules, which allow anyone with a MySQL server to run it fairly easily. I still have a lot of challenges ahead of me, but I’m certain it will be interesting to see what my application will look like three months from now!
I’ve been meddling with MySQL for Python the past week, to make my data more sustainable than just keeping it in RAM and losing everything when I shut down Python… I’ll reserve my horrors of installing MySQL for some other post, but everything is working now. So the first thing I tried was taking the ASCII exports with the pressure data and getting them into my database.
The ASCII export can come in two flavors; I wish it didn't, but I'm 100% sure that if I focus on the right format, tomorrow I'll get someone who uses the left… Either way, currently I parse the file using Joe Kingston's code, which he kindly supplied earlier (follow this link for the code). This strips off the headers and eventually puts the data in a numpy array.
import numpy as np

class Datafile(object):
    """Reads in the results of a single measurement.
    Expects an ascii file of timesteps formatted similar to this:

    Frame 0 (0.00 ms)
    0.0 0.0 0.0
    0.0 0.0 0.0

    Frame 1 (0.53 ms)
    0.0 0.0 0.0
    0.0 0.0 0.0
    ...
    """
    def __init__(self, filename):
        self.filename = filename

    def __iter__(self):
        """Iterates over timesteps. Yields a time and a pressure array."""
        def read_frame(infile):
            """Reads a frame from the infile."""
            frame_header = infile.next().strip().split()
            time = float(frame_header[-2][1:])
            data = []
            while True:
                line = infile.next().strip().split()
                if line == []:
                    break
                data.append(line)
            return time, np.array(data, dtype=np.float32)

        with open(self.filename) as infile:
            while True:
                yield read_frame(infile)

    def load(self):
        """Reads all data in the datafile. Returns an array of times for
        each slice, and a 3D array of pressure data with shape
        (nx, ny, ntimes)."""
        times, dataslices = [], []
        for time, data in self:
            times.append(time)
            dataslices.append(data)
        return np.array(times, dtype=np.float32), np.dstack(dataslices)
(Note: I've long since rewritten this part, to make it more suit my needs)
Then in my best newbish SQL I created a connection to the database:
import MySQLdb

mydb = MySQLdb.connect('localhost', 'ivo', '*******', 'data')
cur = mydb.cursor()
Then I did a standard insert statement for every value:
ny, nx, nz = np.shape(data)
query = """INSERT INTO `data` (frame, sensor_row, sensor_col, value)
           VALUES (%s, %s, %s, %s)"""
for frame in range(nz):
    for row in range(ny):
        for col in range(nx):
            cur.execute(query, (frame, row, col, data[row, col, frame]))
Now I knew this wasn't efficient, but I just wanted to make sure 'it worked'. Taking 6 minutes is of course unacceptable, though I was trying to insert 4,000,000 values, so what was I expecting? Anyway, after reading another chapter from MySQL for Python, I learned about executemany(), which instead of running a separate insert for each value takes a tuple with all the data you want to insert and batch processes it.
Furthermore, I decided that it was far easier to ditch all the zeros from my data (as you can see above, it’s over 99% of the data…), so I added a simple if statement to get rid of them.
query = """INSERT INTO `data` (frame, sensor_row, sensor_col, value)
           VALUES (%s, %s, %s, %s)"""
values = []
for frame in range(nz):
    for row in range(ny):
        for col in range(nx):
            if data[row, col, frame] > 0.0:
                values.append((frame, row, col, data[row, col, frame]))
cur.executemany(query, values)
This magically reduced the entire processing time to about 20 seconds, of which 14 seconds are spent on building values, a list with 37k tuples holding all the data. Still not very efficient, since it would take me 10 minutes to process all the data of one dog.
Replacing the triple loop with numpy's nonzero() avoids iterating over all those zeros in Python:

query = """INSERT INTO `data` (frame, sensor_row, sensor_col, value)
           VALUES (%s, %s, %s, %s)"""
values = []
rows, cols, frames = np.nonzero(data)
for row, col, frame in zip(rows, cols, frames):
    values.append((frame, row, col, data[row, col, frame]))
cur.executemany(query, values)
Some suggested turning off indexing while inserting the data. I'm sure it helps, I just couldn't see a noticeable impact. So for now I'll just leave it out for simplicity's sake.
But f00 pointed me to something else that was quite interesting: LOAD DATA INFILE. For the life of me, though, I just couldn't get it to parse the file correctly. If it would allow the database to load the file directly, that would take away any overhead from Python, though potentially at the cost of keeping all those zeros… (perhaps it's possible to get this data compressed?!?)
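In case I ever do get it working, the plan would be something like this: write only the non-zero values to a CSV file from Python, then let MySQL bulk-load that file. This is just a sketch with made-up file and table names, not something I have running:

```python
import numpy as np

# Stand-in pressure data: a tiny array with only two non-zero sensors.
data = np.zeros((4, 4, 3), dtype=np.float32)
data[1, 2, 0] = 0.8
data[3, 0, 2] = 1.5

# Dump only the non-zero values, one (frame, row, col, value) per line.
rows, cols, frames = np.nonzero(data)
values = np.column_stack((frames, rows, cols, data[rows, cols, frames]))
np.savetxt('data.csv', values, fmt='%d,%d,%d,%.4f')

# Then, on the MySQL side (assuming the server may read the file):
# LOAD DATA INFILE '/path/to/data.csv' INTO TABLE `data`
#     FIELDS TERMINATED BY ','
#     (frame, sensor_row, sensor_col, value);
```

That way Python only does the cheap part (filtering the zeros) and the database handles the bulk insert itself.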
Anyway, f00 asked me to elaborate on:
but can you post a little more info about what you do with the data once loaded as it will determine the direction of my design. Posting any table definitions you have, numbers of patients, frequency of scans/measurements, typical queries
Well part of the problem is that I know so little of SQL that while I’ve tried thinking it through, I simply don’t know how to manage my data yet! However, the basic workflow would be:
You create a new subject: human, dog or whatever you want. Here the user will need to add information like name and address, but also any medical meta-data, like an anamnesis. These will go into separate tables. Then you add data (the ASCII file) to the subject, which needs to be stored in the database. Furthermore, you should also be able to look up a subject already stored in the database later on, to analyze the results or edit his data. For the current study there were 24 measurements per dog, but in normal cases I'd expect about 10 measurements. Furthermore, there were about 30 subjects, but the clinic has already measured over 100 additional subjects…
If you have new data, we process it. This means loading the data from the database and calculating new things: where the contacts are, what each contact belongs to and, when you're done, all the results. These results need to be stored in the database as well, so each of them will get its own table. For dogs, each measurement has about 8 contacts or more (up to 15-20); humans would often have anywhere between one and four contacts in one measurement.
Most of the time, measurements are done under several conditions. You either want to compare those measurements with each other or with 'normal data'. Looking at single contacts isn't as valuable, due to variations, so I would query all the results from measurements with similar conditions. However, I don't want to average running with walking, so I need to be able to pick a certain condition and calculate average results for it. Most results are values over time, like the progression of the total pressure, the center of pressure or the pressure for each toe, or more 'static' values, such as the orientation/rotation of the paw, the moments of peak pressure etc.
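To illustrate the kind of query I mean, here's a minimal sketch. The table and column names are made up, and I'm using an in-memory SQLite database purely for the illustration; the real database is MySQL:

```python
import sqlite3

# Hypothetical results table: one stored result per measurement,
# tagged with the condition it was measured under.
con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE results (measurement_id INTEGER, "
            "condition TEXT, peak_pressure REAL)")
con.executemany("INSERT INTO results VALUES (?, ?, ?)",
                [(1, 'walking', 10.0), (2, 'walking', 12.0),
                 (3, 'running', 18.0)])

# Average per condition, so walking and running never get mixed.
rows = con.execute("SELECT condition, AVG(peak_pressure) FROM results "
                   "GROUP BY condition ORDER BY condition").fetchall()
# rows is [('running', 18.0), ('walking', 11.0)]
```

The GROUP BY does exactly what I want here: one average per condition, computed inside the database.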
Because we measured healthy dogs in this study, but have already measured a sizable amount of lame dogs, I would want to compare the measurements of a lame dog to the averages of my healthy dogs. I’m not 100% sure whether it’s better to calculate these values when I need them or to store them in my database, to be more efficient.
This is mostly the same as analyzing: you make a selection of the data you want and it gets output to a file (csv or whatever would be the most useful) or a pdf report. So again, I need to be able to refine my queries to retrieve the right data.
This would probably result in the following tables (more or less):
This should make sure most of my data complies with those nice Normal Form rules.
- I believe most of the data is as atomic as it can be; the only duplicated information is the IDs that link tables and, in one place, the measurement frequency (I might even drop it from Contacts).
- My data 'feels' clustered in groups that belong together; anything that needs multiple rows for the same data is in its own table (protocol, side, zones, parameter, data).
- I guess I have to think about what my primary keys are, but I honestly believe it’s ok-ish.
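Since I haven't pinned the schema down, here is only a rough sketch of what such table definitions might look like. All of the DDL below is hypothetical, and I'm trying it against an in-memory SQLite database purely for convenience; the real thing is MySQL, with engine and key details to match. Tables for zones, parameters and the individual results would follow the same pattern:

```python
import sqlite3

ddl = """
CREATE TABLE subjects (
    subject_id   INTEGER PRIMARY KEY,
    name         TEXT,
    address      TEXT
);
CREATE TABLE measurements (
    measurement_id INTEGER PRIMARY KEY,
    subject_id     INTEGER REFERENCES subjects(subject_id),
    frequency      INTEGER,
    protocol       TEXT
);
CREATE TABLE contacts (
    contact_id     INTEGER PRIMARY KEY,
    measurement_id INTEGER REFERENCES measurements(measurement_id),
    side           TEXT
);
CREATE TABLE data (
    contact_id INTEGER REFERENCES contacts(contact_id),
    frame      INTEGER,
    sensor_row INTEGER,
    sensor_col INTEGER,
    value      REAL
);
"""
con = sqlite3.connect(':memory:')
con.executescript(ddl)
tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
```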
I still don't know what the heck to do with calculating averages. Since the application has to be used in daily practice and won't have some powerful server to run on, I can't recalculate them for every measurement. While I could make separate tables (or IDs) for averaged data, I'd still need to come up with a sensible way to calculate them and keep them up to date. Any suggestions are definitely welcome!
BTW, for those who claim I shouldn't bother with averaging: show me how to average a couple hundred 3D arrays (albeit small, 15x15x50), which need normalizing before they can even be averaged…
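For what it's worth, here is roughly how I would attack that in numpy, assuming 'normalizing' means resampling each contact to a common number of frames and zero-padding the spatial dimensions; the real pipeline may well differ:

```python
import numpy as np

def normalize(contact, shape=(15, 15, 50)):
    """Resample a contact's time axis to a fixed number of frames and
    zero-pad its spatial dimensions, so every contact ends up with the
    same shape and can be averaged."""
    rows, cols, frames = contact.shape
    old_t = np.linspace(0.0, 1.0, frames)
    new_t = np.linspace(0.0, 1.0, shape[2])
    resampled = np.empty((rows, cols, shape[2]))
    for r in range(rows):
        for c in range(cols):
            resampled[r, c, :] = np.interp(new_t, old_t, contact[r, c, :])
    padded = np.zeros(shape)
    padded[:rows, :cols, :] = resampled
    return padded

# A few stand-in contacts with varying stance durations:
contacts = [np.random.rand(12, 13, n) for n in (42, 55, 61)]
average = np.mean([normalize(c) for c in contacts], axis=0)
```

Once every array has the same shape, the averaging itself is a one-liner; it's the normalizing that takes the work.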
While the goal of this project was simply to test the feasibility of using the pressure plate with dogs, it was clear to me from the start that I needed to make an application out of it. There's just one little problem… I couldn't program one bit. Sure, I was dangerous enough to do some data processing with Matlab, enough to earn a Masters degree even, but that hardly makes me a programmer. Stack Overflow reminds me of this fact every day.
But I was determined to make sure all the other researchers out there don't have to go through the same pain every single project. So I started learning Python in August 2010 and, with the help of several Stack Overflow questions, worked my way through several books, tutorials and other available resources to get closer to my goal. Now that I had managed to learn enough of the syntax to be even more dangerous than I had ever been in Matlab, and had gotten some very promising results, I arrived at an imaginary crossroads:
Get more projects and just stick to data processing, basically maintaining the status quo, or dive into Python even deeper and finish what I started. Choosing the latter means I won't have any new projects until I've learned enough to create an app that's good enough that new projects won't need me around to do the processing.
Now I can already see some more seasoned programmers starting to look worried: here's someone with no programming experience whatsoever, wanting to learn programming in what, a matter of months? Well, true, with the one difference that I'm working on it full-time and don't have any other projects distracting me. So the first task I set myself: learn how to make a GUI!
With some help from my Super User (Fake-Programmer) buddies, I looked at several possible Python frameworks I could use for my GUI. My first goal was to see if I could make the pressure plate 'roll off' on the screen. Pretty simple, I would say: just draw a window, draw an image inside it and update that with every frame of the plate, right? Well yes, but there are a dozen ways of doing it, of which probably 90% are wrong and pretty darn slow!
I tried looking at Qt, but after Nokia decided to collaborate with Microsoft, that's probably not going anywhere. Besides, Qt is heavily C++ oriented, so I couldn't make any sense of the documentation, and decent documentation for Python seemed to be lacking. I tried some PyGTK tutorials, only to fail at getting the GTK part of the installation working… I guess the Linux guys really don't like Windows all that much! (Note: I did manage to get it working 2 weeks later.) I settled on wxPython, because first of all they have an awesome book, filled with 'working' examples that are also all available for download, so if something's not working in your code, you can compare against them to figure out your mistakes (and hopefully learn from them!)
While the book is $35 and from 2006 already, I can definitely recommend it. It really takes you by the hand to explain the syntax and has clearly written examples. One of the things I appreciated the most is that the syntax is very, very consistent and doesn't use any 'fancy' Python features that obfuscate what the code is supposed to do. Furthermore, wxPython automatically uses the system's native theme, so while your app might look generic, at least it doesn't look like it originated from Windows 95!
The only complaints I had were some typos in the code, which were pretty nasty to locate if you don't know exactly what the code is supposed to do, and the fact that multimedia + graphics were barely covered. Yes, there are code examples that draw bitmaps; yes, I can find working examples of an image viewer; but you know what the problem is with those? They're all static! I want to show off my plate data at 60 fps. I have a fast pc, so the only reason I can't animate this any faster is that the code I wrote sucks, and the book didn't help me solve that part!
It took me about two weeks to get through the book, because I literally typed out every code example in it and got sidetracked a couple of times when I learned something new and wanted to try it out on my own data. After reading this book, I do feel more comfortable making a GUI around my code. Now my only worry is the object-oriented part: making sure I design my framework right, so I don't realize halfway through that I have to rewrite my code…
To make sure I know what I’m doing, I’m now rereading the chapters on Classes in Learning Python. I can already say that everything makes a lot more sense after having actually used classes in practice. Depending on how confident I am after reading this, I might also read the relevant chapters from Dive into Python or another ‘beginners’ book.
I think knowing classes is vital for developing my app, because even though I refactored my code halfway through, I still ended up with one long script that processes all my data at once and stores nothing. You'd almost think I learned nothing… So now I'm planning to properly dissect my code, so I can process the data in a reliable fashion (no crashing halfway!) and hopefully make it more persistent too. Ideally, if all the code is correct, the data would be loaded, processed and the results stored in a database. Every time you want to view or analyze the results, you'd simply retrieve the ones you need, without having to reprocess every trial of every dog…
So as you can see, I still have a long way ahead of me, but I feel I've gotten at least a step closer to my goal!