
Flipse R&D - Blog

So what CAN you do?

Mon, 28 Jan 2013

Well, good question! Definitely not improving Joe’s image recognition algorithm, though we did have some email conversations about possible ways to improve it. Probably the best way would be to have several training sets based on the dog’s actual walking pattern and its characteristics.

Obviously, it’s not fair to compare a small dog with a large dog.

Big dog vs little dog

So separating the data into smaller clusters that are more alike is definitely a way forward.

Another improvement is more of a post-processing step using heuristics, where each heuristic returns a probability that a given contact really is a certain paw. As long as the probabilities are high enough, you can sort a larger portion of the data with more precision.

What else is there to do? Well, first off it would help to have some measurements correctly annotated that don’t follow the pattern used in the algorithm. To make this task easier, I’m considering making a simple GUI that shows the entire plate with all the located paws and their order of contact, and then lets you manually ‘select’ each step that’s incorrectly recognized and override the current annotation. And pray it doesn’t look like this:

Lots of little steps with temporal information

Perhaps an easy version would highlight each paw in a measurement iteratively and ask for user input:

7 – Left Front       9 – Right Front

1 – Left Hind       3 – Right Hind

This might be a bit cumbersome, but perhaps I only have to do it for 3-4 measurements per dog to help build up a better training set.
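
As a rough sketch of that idea (the keypad mapping and the input handling below are made-up placeholders, not an actual GUI):

KEYPAD = {'7': 'LF', '9': 'RF', '1': 'LH', '3': 'RH'}

def annotate(contacts):
    """Ask the user to label each detected contact in order of occurrence."""
    labels = []
    for i, contact in enumerate(contacts):
        # a real GUI would highlight this contact on the plate here
        key = input("Contact %d of %d - which paw (7/9/1/3)? " % (i + 1, len(contacts)))
        labels.append(KEYPAD.get(key.strip(), 'unknown'))
    return labels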

But perhaps the eventual results can be of help! You see, the heuristics are mostly based on assumptions we have about what the data should look like. Well, what better way to create new heuristics than by letting the data do the talking?

So I started some number crunching using the measurements of one dog that were surprisingly well sorted by the current implementation.

First up: a histogram with the step durations of each paw

Step durations for each paw

What’s strange is that while the mean for each paw is around 350 ms, there are a couple of trials that are a lot shorter. So I decided to check whether this was just random or whether something else was the cause. Turns out three trials account for all the steps with durations below 300 ms! One was particularly notorious, with 11 steps faster than 300 ms. I can only hypothesize, but my guess is that these trials came right after a trotting trial and the dog was still a bit overexcited.

When calculating the gait velocity (by subtracting the coordinates of the first and last paw and dividing the distance by the time difference), I came to an average of 1.10 +/- 0.11 m/s (about 4 km/h). But look at the three trials with the short step durations: 1.38, 1.26 and 1.12 m/s. So yes, the dog was clearly walking above the ‘average’ gait velocity in these trials.
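
For what it’s worth, a minimal sketch of that velocity calculation (the sensor pitch and frame rate below are made-up numbers, not the plate’s real specifications):

import numpy as np

def gait_velocity(first, last, sensor_pitch=0.0075, frame_rate=125):
    """Rough gait velocity from the first and last contact of a measurement.

    Each contact is assumed to be a dict with its (row, col) position and the
    frame at which it touched down."""
    distance = np.hypot(last['row'] - first['row'],
                        last['col'] - first['col']) * sensor_pitch
    duration = (last['frame'] - first['frame']) / frame_rate
    return distance / duration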

What else have we got?

Well, I wanted to go for something basic: step & stride length! In humans these are calculated from the distance between the heel strikes of the left and right foot. But obviously a dog has four paws, so maintaining this definition is a bit strange. I then decided to calculate the distance between each paw and all the other paws, but I ended up getting lost in which paws I was actually comparing…

In the end I gave up for now, getting frustrated with not figuring it out completely (I blame the coffee!), but I did manage to calculate the distance between consecutive paws of the same side, so right front to right hind and vice versa. But I do plan on getting better results for this one!
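
In case it helps anyone, here’s roughly what that same-side calculation looks like (assuming each contact is a dict with a paw label and a (row, col) position, and using a made-up sensor pitch):

import numpy as np

def same_side_distances(contacts, sensor_pitch=0.0075):
    """Distance between consecutive contacts on the same side (left or right)."""
    last_on_side = {}
    distances = {'L': [], 'R': []}
    for contact in contacts:
        side = contact['paw'][0]                    # 'L' or 'R'
        previous = last_on_side.get(side)
        if previous is not None:
            d = np.hypot(contact['row'] - previous['row'],
                         contact['col'] - previous['col']) * sensor_pitch
            distances[side].append((previous['paw'], contact['paw'], d))
        last_on_side[side] = contact
    return distances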

Step duration for both sides

The –20 can be explained by a mix-up in the sorting order. The 100 is most likely explained by a missed step. Everything in between needs some more research to explain (which part is Front-Hind and which part is Hind-Front).

Why am I mixing things up?

Overview of COP for multiple measurements

These are all the walking trials with the center of pressure (or mass) plotted over them. The center of pressure is calculated for each frame of the measurement and is the point around which the body’s weight balances.
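
For each frame it boils down to a pressure-weighted average of the sensor coordinates; a minimal sketch, assuming the measurement is a (rows, cols, frames) array:

import numpy as np

def center_of_pressure(data):
    """Pressure-weighted centroid (row, col) for every frame of a measurement."""
    rows, cols, frames = data.shape
    y, x = np.mgrid[0:rows, 0:cols]
    cop = np.full((frames, 2), np.nan)      # frames without any pressure stay NaN
    for t in range(frames):
        frame = data[:, :, t]
        total = frame.sum()
        if total > 0:
            cop[t] = [(y * frame).sum() / total, (x * frame).sum() / total]
    return cop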

The very sharp lines here indicate that a contact landed just on the edge of the plate, often near the end of the measurement. However, these small contacts (which are often wrongly annotated too!) break the pattern you normally expect and make it incredibly annoying to write algorithms from scratch. Next time, I’ll first check for a certain pattern and only try to calculate steps there, as then I can at least guarantee a correct result.

Now for something more ‘groundbreaking’! Most other pressure measurement systems used with dogs lack the resolution in sensor density to say anything about the distribution within a paw. However, as long as the dog isn’t a chihuahua, this system does perfectly fine!

So I returned to my first SO question, Peak detection in a 2D array, which helped me locate the five toes in a paw. So far so good: implementing the calculation wasn’t much of a problem. But visualizing it was! You see, I have about 60 impacts per paw, so when you put them all in one graph you get this: a whole bunch of spaghetti!

Sums of pressure over time

I decided to reduce this to an average (thick green line), to actually be able to compare them in a comprehensible way. But I ran into my next problem: not every trial has the same length. So if you want to do something nifty like using numpy’s built-in toe1.mean(axis=1), you can forget it, because an array needs all its rows to have the same length… OK, so what do I do?

There are two options: either I rescale every item to a standard length (the hard way) or I just create a zero-filled array and stuff each item in there. The latter basically pads the data with zeros at the end to make every row the same length. This too is a very basic task, but every time I tried to stick a row of data into the new array I got an error: shape mismatch. Argh, I have to tell the new array where the row starts (at zero, duh!) and ends (len(row)): problem solved!
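
In other words, something along these lines (a sketch with made-up toy data):

import numpy as np

def pad_rows(rows):
    """Stuff variable-length 1D rows into one zero-filled 2D array."""
    padded = np.zeros((len(rows), max(len(row) for row in rows)))
    for i, row in enumerate(rows):
        padded[i, :len(row)] = row      # tell the array where the row starts and ends
    return padded

print(pad_rows([[1, 2, 3], [4, 5]]).mean(axis=0))   # no more shape mismatch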

Not so hard you think eh?

But then we started looking at the results (different paw this time) and someone complained about the wobbly shape.

Average force over time with standard deviations

I blamed this on the fact that the data is based on one sensor, namely the one with the maximum pressure in a certain area. Peaks are peaks for a reason: they aren’t sustained for long. Also, four of the toes have a (sharp) nail which, when pressed to the ground, exerts a sharp peak of pressure.

So how do we make the data a little bit less sensitive to this peak alone? Well we increase the area!

from scipy.ndimage import maximum_filter, generate_binary_structure, binary_erosion

def detect_peaks(image):
    # define an 8-connected neighborhood
    neighborhood = generate_binary_structure(2, 2)

    # apply the local maximum filter; all pixels of maximal value
    # in their neighborhood are set to 1
    local_max = maximum_filter(image, footprint=neighborhood) == image
    # local_max is a mask that contains the peaks we are
    # looking for, but also the background.
    # In order to isolate the peaks we must remove the background from the mask.

    # we create the mask of the background
    background = (image == 0)

    # a little technicality: we must erode the background in order to
    # successfully subtract it from local_max, otherwise a line will
    # appear along the background border (artifact of the local maximum filter)
    eroded_background = binary_erosion(background, structure=neighborhood, border_value=1)

    # we obtain the final mask, containing only peaks,
    # by removing the background from the local_max mask
    # (XOR rather than subtraction, since both masks are boolean)
    detected_peaks = local_max ^ eroded_background

    return detected_peaks

Here I was thinking that the ‘generate_binary_structure(2,2)’ corresponded to my request of finding a 2×2 box around the peak pressure, with boxes that shouldn’t be in contact with each other. You know, like so:

Raw data with the toe locations overlaid

But as I could see in my own results, I only had a 1×1 coordinate: that of the peak. Bummer! OK, but if I have the coordinate of that point, can’t I just add +1 in x and y and call it a day? So I tried to add a slice to my new toe data, but I learned something new about Python slicing I hadn’t thought of: a slice runs from the start index up to the stop index, not including the latter! This stuff just makes my day!
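
So grabbing the 2×2 box around a peak needs a stop index of +2, not +1 (a tiny example with made-up data):

import numpy as np

frame = np.arange(25).reshape(5, 5)     # stand-in for one frame of paw data
py, px = 2, 3                           # assumed peak coordinates (top-left of the box)
box = frame[py:py + 2, px:px + 2]       # the stop index is exclusive, hence the +2
print(box.shape, box.sum())             # (2, 2) and the summed pressure for this frame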

After getting this out of the way I got a new surprise: all the code I just wrote for padding an array with zeros? Well, that just got four times worse, because instead of passing along 1D rows of data, I was now passing along lists of 2×2-shaped arrays. I couldn’t make any sense of indexing these buggers! All I wanted was to calculate a mean! So I thought screw it: I’m going to sum up those four values for each frame, so that I’m back to just a 1D row. Hooray, you think, eh?

Turns out, for the small toe the mean was zero, yes zero! Because the pixels I was adding actually didn’t have any pressure in them most of the time. Average that over a whole bunch of trials and you get: nothing! Back to the drawing board… I decided to tweak the location of the index in a circle around the maximum and see what results this would give me. Here’s a snapshot, where blue is the maximum:

Pressure per toe location

Clearly, not using the other sensors would overestimate the pressure in that area, and not only that: the shapes aren’t the same either. So I tweaked the position so that hopefully this effect is reduced somewhat. The result of all this blood, sweat and tears:

Average pressure per toe for each paw

Interestingly, there’s a clear difference in the peak pressures between the front (top) and the hind (bottom) paws. Furthermore, I should note that I didn’t mirror the ordering of the toes from left to right. Currently it picks the rear toe as toe 1, because spatially it (almost) always comes first. Then I sort the remaining coordinates in ascending order; toes 2-5 are assigned in the order they occur. While this seems to work, it’s clearly not perfect, because it doesn’t take into account that if a paw rotates far enough, these sideways positions may change. In the future I should probably take into account the distance from each toe to toe 1 and see in what direction it points.
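
For reference, the current ordering boils down to something like this (which axis is the walking direction and which is sideways is an assumption here):

def order_toes(toe_coords):
    """Toe 1 is the rearmost coordinate, toes 2-5 follow by sideways position."""
    coords = sorted(toe_coords, key=lambda rc: rc[0])              # rearmost first
    return [coords[0]] + sorted(coords[1:], key=lambda rc: rc[1])  # toes 1-5 in order

print(order_toes([(8, 4), (2, 1), (3, 5), (1, 3), (2, 7)]))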

Segmentation of paw with axis

Something else I could consider in the future (using the lines in the picture above) is rotating each impact into a neutral position; from there the toes should always end up in more or less the same area.

This angle of rotation is interesting for multiple reasons. It lets me rotate paws back to a neutral position, which helps with the image recognition because it reduces the variance between the paws. In humans, certain problems are associated with excessive rotation of the foot (either internal or external).

Charlie Chaplin

And if you want to describe the amount of pronation or supination (rotation of the foot in 3D), like in the image below, you need an axis to describe this rotation.

Rotations around ankle axis

In humans it turns out there’s a strong relation between the amount of pressure on the medial side (towards the body’s core) and internal rotation of the shank. So why wouldn’t it be the same with dogs? Therefore, if we can divide the paw into two halves, we can compare the pressure under both halves and hopefully say something smart about the movement of the leg above it. Considering the clinic wants to evaluate the gait pattern of lame dogs, I assume this will be very interesting!
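
A very naive sketch of that idea: split the impact down the middle and compare the pressure under both halves (which half counts as medial depends on whether it’s a left or a right paw, and the orientation below is an assumption):

def medial_lateral_ratio(paw, is_left_paw):
    """Ratio of medial to lateral pressure for one impact.

    `paw` is assumed to be a (rows, cols) or (rows, cols, frames) array with
    the columns running from the dog's left to its right."""
    mid = paw.shape[1] // 2
    left_half, right_half = paw[:, :mid].sum(), paw[:, mid:].sum()
    medial, lateral = (right_half, left_half) if is_left_paw else (left_half, right_half)
    return medial / lateral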

Something else I’d love to figure out is a classification of the roll-off pattern. In humans we have the following:

Phases of the roll-off

This describes five moments in the roll-off: heel strike, initial forefoot contact, forefoot flat, heel off and toe off. These five moments occur in just about every healthy human’s roll-off, so now I’m left wondering what the equivalent is for dogs.

From the graph with the pressure under the toes, at least one phase can be deduced, and that’s ‘heel off’, as far as you can call the fifth toe in dogs a heel. I’m sure there are other moments we can deduce from this. If anybody has any suggestions, leave a comment!

Sums over time with average

Here’s the total pressure under each of the four paws, where the thick blue line is again the average. Perhaps that first bump can be interpreted as initial forefoot contact, or maybe even foot flat already. The second bump is probably preceded by the heel lift, which might be a more reliable measure.

Anyway, I think I’ve made some progress getting these results out of these measurements. There are a few kinks I need to iron out, but after that I will try to apply it to other measurements. Perhaps one idea would be to calculate everything for each impact, cluster them according to all the results and then see how many false positives we get!

For everyone who made it this far, feel free to leave a comment if you have any questions or want more information!


Face it: You know nothing about Face Recognition!

Mon, 28 Jan 2013

In my previous post I was a little skeptical about whether I would get an answer, only to find an awesome one the day after (thank you, Joe Kington!).

Here’s his own overview of the answer:

There are essentially two ways to approach the problem, as you noted in your question. I’m actually going to use both in different ways: use the (temporal and spatial) order of the paw impacts to determine which paw is which, and try to identify the “pawprint” based purely on its shape. Basically, the first method works when the dog’s paws follow the trapezoidal-like pattern shown in Ivo’s question above, but fails whenever the paws don’t follow that pattern. It’s fairly easy to programmatically detect when it doesn’t work.

Therefore, we can use the measurements where it did work to build up a training dataset (of ~2000 paw impacts from ~30 different dogs) to recognize which paw is which, and the problem reduces to a supervised classification (With some additional wrinkles... Image recognition is a bit harder than a “normal” supervised classification problem).

His solution uses image recognition to match an impact with one of the four paws. But since I know absolutely nothing about image recognition, I had a hard time understanding how the solution worked. Don’t know what I’m talking about? Here’s the part of the code that failed to make sense to me:

def classify(self, paw):
    """Classifies a (standardized) pawprint based on how close its eigenpaw
    score is to known eigenpaw scores of the paws for each leg. Returns a code
    of "LF", "LH", "RF", or "RH" for the left front, left hind, etc paws."""
    # Subtract the average paw (known a-priori from the training dataset)
    paw -= self.average_paw
    # Project the paw into eigenpaw-space
    scores = np.dot(paw, self.basis_vecs) 
    # "whiten" the score so that all dimensions are equally important
    scores /= self.basis_stds
    # Select which template paw is the closest to the given paw...
    diff = self.template_paws - scores
    diff *= diff
    diff = np.sqrt(diff.sum(axis=1))
    # Return the ascii code for the closest template paw.
    return PAW_CODE_LUT[diff.argmin()]

The first part is easy: paw -= self.average_paw subtracts everything all paws have in common, which therefore doesn’t really help to discriminate between them.

But then we get to the eigenvector part: scores = paw.dot(self.basis_vecs). Oblivious to what eigenvectors really are (I’ve heard of them, but don’t understand what they do), I turned to Wikipedia for an answer:

A square matrix represents a linear transformation of the vectors in a vector space.

The mathematical expression of this idea is as follows. If A is a linear transformation, a non-zero vector v is an eigenvector of A if there is a scalar λ such that

Av = λv

The scalar λ is said to be the eigenvalue of A corresponding to v.

An eigenspace of A is the set of all eigenvectors with the same eigenvalue. By convention, the zero vector, while not an eigenvector, is also a member of every eigenspace. For a fixed λ

E_λ = { v : Av = λv }

is an eigenspace of A.

I came for an explanation of what they meant, you know, in layman’s terms, not to be smacked to death with mathematical jargon… How does a square matrix represent a linear transformation of the vectors in a vector space? What vectors in what space? Talk English, will you! Then they go on about a vector v which is an eigenvector if there’s a scalar λ such that Av = λv. Right, what the heck is a scalar and why would that let me substitute A with λ?

Funnily enough, I had university-level mathematics for Human Movement Science (or Kinesiology in other parts of the world). However, we only needed the math for courses like 3D Kinematics, Biomechanics and Data Processing, so the theory was mostly hidden behind walls of more practical calculations that just let me get the job done. Obviously, that doesn’t help me when I want to learn something new which actually does require me to understand the theory behind it! Sadly, most resources behave like Wikipedia: they assume you can read formulas and don’t try to explain what they do in plain English.

Naturally I turned to Google, and if you search long enough, you’ll probably find what you’re looking for: an entry-level tutorial explaining how face recognition works! For anyone like me who doesn’t know too much about face/image recognition, this is definitely recommended reading, if only because the author takes the time to guide the reader by the hand through the theory. Another notable mention goes to Omid Sakhi and his site facedetection.com, which has a small newsletter-style course on face detection where you get an email every day explaining the theory in bite-size pieces.

Even though I still don’t completely understand what an eigenvector itself is or how it’s calculated, I have a much better understanding of the whole process in general. I spent a good two days just browsing around for more information and I must admit it got me totally hooked! So much so that I didn’t even try to get the code working, which, when I finally did, turned out to give me an error message. This is especially unfortunate because it makes it harder to check what all the intermediate results look like. After all, an image is worth a thousand words, isn’t it?

Luckily Joe’s answer is very comprehensive, so I can explain how it works by his words and examples. His first step was to check for the trapezoidal pattern I described in my previous post:

Different patterns

If the impacts follow this pattern, they automatically get assigned to the paws, and from these impacts a training set is assembled. This training set is then used to create an image of what an ‘average’ left front, etc., paw looks like.

To be able to create an average image, we first need to scale each impact to the same size and standardize the values. The end result is pretty consistent:
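
The scaling itself could look something like this (the 20×20 target size matches the impacts described below; the max-normalization is my assumption, the real standardization may differ):

import numpy as np
from scipy.ndimage import zoom

def standardize_paw(paw, size=20):
    """Rescale an impact to size x size sensors and normalize its values."""
    paw = paw.astype(float)
    rescaled = zoom(paw, (size / paw.shape[0], size / paw.shape[1]))
    return rescaled / rescaled.max()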

Average paws for each leg Average paw for all legs

Before we go any further, we subtract this average paw from each of the separate paws. Now if we look at how each paw differs from the mean, the different paws are clearly distinguishable.

Difference between each paw and the average

Each impact (20×20 pixels) can be described as a 400-dimensional vector (400×1) and compared to these four. But this doesn’t work consistently enough, so instead we build a set of ‘eigenpaws’ (Joe’s version of eigenfaces) and describe each impact as a combination of eigenpaws.

This is identical to principal components analysis, and basically provides a way to reduce the dimensionality of our data, so that distance is a good measure of shape.

You make them by using this function:

import numpy as np

def make_eigenpaws(paw_data):
    """Creates a set of eigenpaws based on paw_data.
    paw_data is a numdata by numdimensions matrix of all of the observations."""
    average_paw = paw_data.mean(axis=0)
    paw_data -= average_paw

    # Determine the eigenvectors of the covariance matrix of the data
    cov = np.cov(paw_data.T)
    eigvals, eigvecs = np.linalg.eig(cov)

    # Sort the eigenvectors by ascending eigenvalue (largest is last)
    eig_idx = np.argsort(eigvals)
    sorted_eigvecs = eigvecs[:, eig_idx]
    sorted_eigvals = eigvals[eig_idx]    # eigvals is 1D, so no column indexing here

    # Now choose a cutoff number of eigenvectors to use
    # (50 seems to work well, but it's arbitrary...)
    num_basis_vecs = 50
    basis_vecs = sorted_eigvecs[:, -num_basis_vecs:]

    return basis_vecs

When this is applied to all the data and the eigenvectors are sorted by size, you get 50 eigenpaws (the left image shows the 9 largest). Imagine each eigenpaw as a coordinate axis in a ‘paw space’; for each paw we can then create a cluster in that space. So the next time we see a new impact, we project it into this paw space and assign it to the appropriate cluster.

Largest eigenpaws

Face space

This projection is done by dotting the 400×1 vector with the 50 basis vectors; that way we only have to compare the resulting 50×1 vector with the template paws from our training set. To classify an impact, we simply use the distance between the vectors, with the following code:

import numpy as np

# Load template data: codebook, average_paw, basis_stds and basis_vecs
paw_code = {0:'LF', 1:'RH', 2:'RF', 3:'LH'}
def classify(paw):
    paw = paw.flatten()
    paw -= average_paw
    scores = paw.dot(basis_vecs) / basis_stds
    diff = codebook - scores
    diff *= diff
    diff = np.sqrt(diff.sum(axis=1))
    return paw_code[diff.argmin()]
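
Pieced together (and with some made-up stand-in data, since I can’t share the real training set here), the whole thing would be wired up roughly like this, assuming make_eigenpaws and classify above are defined:

import numpy as np

rng = np.random.default_rng(0)
codes = np.array(['LF', 'RH', 'RF', 'LH'])
bases = rng.random((4, 400))                            # one fake pattern per paw
training_codes = np.tile(codes, 500)                    # 2000 labelled impacts
training_data = bases[np.arange(2000) % 4] + 0.05 * rng.random((2000, 400))

average_paw = training_data.mean(axis=0)
basis_vecs = make_eigenpaws(training_data.copy())       # 400 x 50 basis

# project the training impacts into 'paw space' and whiten the scores
train_scores = (training_data - average_paw).dot(basis_vecs)
basis_stds = train_scores.std(axis=0)

# one template (mean score vector) per paw, in the order used by paw_code
codebook = np.array([(train_scores / basis_stds)[training_codes == code].mean(axis=0)
                     for code in codes])

print(classify(training_data[0]))                       # ideally comes back as 'LF'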

The impact’s code is then the one whose template vector has the smallest distance to the impact’s score vector. Et voilà:

Labeling with clear pattern

Messy labeling with running trial

Of course, this isn’t perfect. It doesn’t work as well for small dogs, because they lack a clear impact and the toes aren’t as nicely separated as with larger dogs. But that’s part of the study: figuring out the limits of the pressure plate, so the clinic knows what it can or can’t measure with it!

Plus Joe and I have been discussing some tweaks, so that it also works in other conditions. Perhaps we need separate training sets for the running and walking trials, based on the walking pattern of the dog or their relative sizes. But to be of much use in these discussions, I clearly have a lot to learn about image recognition!


What do I do while I haven’t sorted my paws?

Mon, 28 Jan 2013

So I asked a question on Stack Overflow, "How to sort my paws?", to get help from users with knowledge of machine learning and image recognition.

Sadly, I stalled for too long and ended up posting the question just before Christmas… which traditionally causes a major drop in SO activity, so I guess I’ll need to add a bounty to get some fresh attention to my question.

As one of the answers asked, I do have annotations of every first contact that hits the plate:

firstpaw = {'svl_1': 'left', 'svl_3': 'right', 'svl_2': 'right', 'svr_3': 'left',
  'ser_3': 'right', 'svr_1': 'right', 'ser_1': 'right', 'dvl_2': 'left', 'der_3': 'right',
  'dvl_1': 'left', 'dvl_3': 'left', 'sel_1': 'right', 'sel_2': 'left', 'sel_3': 'right',
  'del_3': 'left', 'del_2': 'left', 'del_1': 'right', 'der_2': 'left', 'der_1': 'right',
  'ser_2': 'left', 'svr_2': 'left', 'dvr_1': 'right', 'dvr_2': 'right', 'dvr_3': 'right'}

However, combining these annotations with Joe’s ‘sorting’ doesn’t work. Why? Because the method assumes it will encounter all four paws consecutively.

What happens in some measurements is that one front paw (LF) strikes the plate just barely, while the other (RF) lands just before the plate. The next two paws to land are RH and LH, and the fourth paw that lands is… left front again! From there on, the sorting becomes completely useless, because it started off on the wrong track.

Visually, we easily spotted that this first paw doesn’t match the four paw pattern we see over and over again:

Pattern in paw positions != Different patterns in paw positions

So clearly, there should be a way to recognize that the first LF isn’t part of the pattern.

Since I’m not an image recognition guru, all I can resort to is trying to use some of the rules I mentioned in my previous post. However, the first step to getting some statistics from the results is of course to manually sort all the paws.

For anyone following what I’m doing and mildly curious to look at the data for themselves: here are the lists with the annotations for the 6 measurements I processed (sorted by their order of occurrence, the third dimension of each slice).

sel_1 = ['lf','rf','lh','lf','rh','rf','lh','lf','rh','rf','lh']  
sel_2 = ['lf','rh','rf','lh','lf','rh','rf','lf','lh','rh','lh']  
sel_3 = ['lf','rf','lh','lf','rh','rf','lh','lf','rh','rf','lh']  
ser_1 = ['rf','lh','lf','rh','rf','lh','lf','rh','rf','lh','lf','rh']  
ser_2 = ['lf','rh','rf','lh','lg','rh','rf','lh','lf','rh']  
ser_3 = ['rf','lh','lf','rh','rf','lh','lf','rh','rf','lh','lf'] 

Before I can do any number crunching, I have to delete all the incomplete impacts. I consider an impact incomplete if the sum of the pressure over the entire array isn’t zero by the time the measurement ends (measurements have a fixed 2-second duration, so sometimes impacts get cut off) or when it landed on the edge of the plate, in which case I can’t say for certain whether the contact is complete (any impact with slice coordinates touching the edge will be deleted).

Here’s an example which has both!

Incomplete paw
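
A sketch of those two checks, assuming each impact is described by the tuple of (row, column, frame) slices that the paw detection returns, plus the shape of the full measurement:

def is_incomplete(coords, plate_shape):
    """True if the contact was cut off by the 2 s limit or touches the plate's edge."""
    rows, cols, frames = plate_shape
    cut_off = coords[2].stop >= frames                 # still loaded when time ran out
    on_edge = (coords[0].start == 0 or coords[1].start == 0 or
               coords[0].stop >= rows or coords[1].stop >= cols)
    return cut_off or on_edge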

After I’ve thrown out all the crap, I can sort them into the four paw groups and start doing some calculations.

I’ll post more once I’ve sorted everything out and I’ll put the data online too.


Going from beginner to … to what actually?

Mon, 28 Jan 2013

When I started this project, I began by hacking my way through Python as if it were a Matlab clone, using Python tutorials and online resources. Soon enough I realized that wasn’t getting me anywhere, because even opening a file and putting its contents in an array was too much to ask.

After that, I decided to start reading Learning Python to get a better grasp of the basics. After finishing it, I was capable of most basic tasks, could use code samples provided by Stack Overflow users, and thought I had an idea of what I should be doing. However, every ‘beginner’ book ends with object-oriented programming and classes. They end there for a good reason: using these tools clearly distinguishes you from a beginner and takes you into the realm of journeyman programmers on their way to becoming masters. Sadly, it becomes a lot harder to find good books that help you get beyond that initial beginner status.

This became painfully obvious when I wanted to apply my paw detection to all the measurements of a dog. All of a sudden my script had to process multiple files and keep track of them in a sensible way. That takes you into the realm of classes, but if you’ve ever seen an entry-level example of a class:

# We can create a class that supports that:
class BalanceError(Exception):
    value = "Sorry, you only have $%6.2f in your account"

class BankAccount:
    def __init__(self, initialAmount):
        self.balance = initialAmount
        print("Account created with balance %5.2f" % self.balance)

    def deposit(self, amount):
        self.balance = self.balance + amount

    def withdraw(self, amount):
        if self.balance >= amount:
            self.balance = self.balance - amount
        else:
            raise BalanceError(BalanceError.value % self.balance)
you’ll surely understand what a daunting task it is to design one yourself for the first time. So I decided to ask another SO question to help address this problem. Even though S.Lott’s answer was very helpful, I guess I was aiming a bit too high, as it didn’t really help me understand how to get the code working the way I intended.

Luckily, after some messing around I did manage to get it working, but then I ran into my next problem. I had divided the code into three classes:

- Dogs, which requires the folder path and the name of the dog; it lists all the files present for this dog and creates a list of their file paths, so they are easy to load. This class is also intended to apply Measurements to each file and stuff the results into a database.
- Measurements, which loads the file that’s passed to it and returns a slice of the array (basically the coordinates) and the data within that slice (an array) for each paw.
- Paws, which detects the toes within a paw and returns their coordinates and values.
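
In skeleton form (all method names and signatures here are made up for illustration, not my actual code):

import os

class Dogs:
    def __init__(self, folder, dog_name):
        self.dog_name = dog_name
        self.file_paths = [os.path.join(folder, f)
                           for f in sorted(os.listdir(folder)) if f.startswith(dog_name)]

    def process(self):
        # apply Measurements to every file; storing results in a database is left out
        return [Measurements(path).find_contacts() for path in self.file_paths]

class Measurements:
    def __init__(self, file_path):
        self.file_path = file_path

    def find_contacts(self):
        """Would load the file and return (coordinates, contact) for each paw."""
        raise NotImplementedError

class Paws:
    def __init__(self, contact):
        self.contact = contact

    def find_toes(self):
        """Would return the coordinates and values of the toes within this paw."""
        raise NotImplementedError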

I hadn’t done any coding on the paw level yet, as I haven’t sorted the paws yet. And that’s what’s bothering me: each slice of data needs to know which paw it belongs to. This sorting should live in Measurements, as the measurement knows where a slice of data is relative to the others.

However, I have a measurement log that tells me, for each measurement, which paw touched the plate first. I figured I could take all the paws, throw them on a heap, cluster them into 4 groups and then count how many of my first paws ended up in each group. If there is enough similarity between the prints of the same paw, but enough difference between the 4 different paws, this should create four groups and the log will tell me which ones are the front paws. Then I only have to sort the hind paws, which shouldn’t be too much of a problem.

BUT! If I want to group up all the measurements, that’s not something done within Measurements. Only Dogs knows there are even multiple files to begin with! So something tells me I don’t really know what I’m doing. Thankfully, I got a couple of additional books and I’m planning to work my way through them in the coming weeks. For now, I’ll focus on getting the sorting sorted out!

Currently my code returns a dictionary with the sliced-out array and the slice itself. Let’s call the first contact, because the array describes the contact of a paw with the plate, and the second coordinates, because the slice is basically the X and Y coordinates over time (Z). I’ll put these in my Dropbox, so that I can share them with anyone interested in helping out.

While sorting the paws could be done with clustering, it’s perhaps much easier to keep it on a measurement level. Especially because each measurement should contain enough information to sort them already:

Entire plate with temporal information

As you can see, there’s a clear, repeatable pattern to it. So perhaps a better approach would be to apply several heuristics and let those decide which paw it is.

You see, the problem is that while it’s not so difficult to sort the paws for healthy dogs, this won’t necessarily be true for lame dogs. (Note: this project is for a veterinary clinic!) So I can’t rely on just one algorithm as it wouldn’t work for all the dogs that would be measured with it.

However, using heuristics should at least be a bit more robust. Some of the rules I’m thinking of are:

• Thanks to the log, I always know which paw is the first. Due to the tracking in the paw detection, this should hold true for the other contacts of this paw as well.
• The front and hind paws are connected both temporally and spatially: when the front paw is lifted, the hind paw should make contact close to it in both time and space. If not, the dog would simply fall over!
• A pressure measurement is like a fingerprint. Each contact of a paw looks alike, so if you can identify one paw correctly, this will be true for the following contacts too. How do they look alike?

  • The duration of the contact will be very similar and is likely to be different between front and hind paws.
  • The pressure distribution, basically the location of the peak pressures will be unique for each paw.
  • The path of the center of pressure can be used for clustering contacts. Furthermore, the sideways motion indicates whether it’s left or right. The patterns are also different between the front and hind paws, because of the anatomy of the legs and their function.
  • The pattern of the total pressure under the paw over time, based on the assumption that the ratio of weight bearing between the front and hind paws is 60-40%.
  • The pattern with which the toes come into contact with the ground and leave it again is another way of distinguishing between left and right and between front and hind paws.
  • The contact surface of the paws will be different, especially because the hind paws tend to be somewhat smaller.

So here we have a set of rules that need to be quantified and compared against a data set that has been manually sorted; then we can figure out how to sort other measurements as well.
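
As a first stab at quantifying a few of these rules for a single contact (the frame rate is a made-up value, and `contact` is assumed to be the sliced (rows, cols, frames) array of one impact):

import numpy as np

def quantify_contact(contact, frame_rate=125):
    """A handful of features that should differ between the four paws."""
    pressure_over_time = contact.sum(axis=(0, 1))
    return {
        'duration': (pressure_over_time > 0).sum() / frame_rate,   # contact time in s
        'peak_pressure': contact.max(),
        'total_pressure': pressure_over_time.sum(),
        'contact_surface': (contact.max(axis=2) > 0).sum(),        # number of loaded sensors
    }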

The first thing I’m going to do now is try to quantify these rules, and I will update my post later on with the results. Once I have those, I’ll be able to write a Stack Overflow question asking for additional help and useful built-ins I can use.


Improving the paw detection

Mon, 28 Jan 2013

As I have explained previously, there were some problems with my paw detection. As an SO user pointed out:

Seems like you would have to turn away from the row/column algorithm as you’re limiting useful information.

Well, I realized I was limiting my options by purely looking at rows and columns. But when you look at the first data I loaded:

Correct paw detection

You might understand why that approach worked perfectly well. All paws are spatially and temporally separated, so there’s no need for a more complicated approach. However, when I started loading up other measurements, it became clear this wasn’t going to cut it:

Detection going wrong with large paws Detection going wrong with small paws

This made me turn to Stack Overflow again to get help with recognizing each separate paw. Thanks to Joe Kington I had a good working answer within 12 hours, and he even added some cool animated GIFs to show off the results:

GIF of the entire plate roll off

As you can see, it draws a rectangle around each area where the sensor values are above a certain threshold (a ridiculously low 0.0001 does a great job). The data gets smoothed first, though, to make sure there are fewer dead areas within each paw. Then each contact gets filled up completely; the connected filled sensors are labeled and cut out, and the slice that contains them is returned.

import scipy as sp
import scipy.ndimage    # so that sp.ndimage resolves

def find_paws(data, smooth_radius=5, threshold=0.0001):
    data = sp.ndimage.uniform_filter(data, smooth_radius)
    thresh = data > threshold
    filled = sp.ndimage.binary_fill_holes(thresh)
    coded_paws, num_paws = sp.ndimage.label(filled)
    data_slices = sp.ndimage.find_objects(coded_paws)
    return data_slices
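
A hypothetical usage example, with a fake contact stuffed into an empty plate just to show what comes back:

import numpy as np

data = np.zeros((64, 64, 250))          # stand-in measurement: (rows, cols, frames)
data[20:25, 10:14, 30:70] = 1.0         # one fake paw contact
for i, (dy, dx, dt) in enumerate(find_paws(data)):
    print("Contact %d: rows %d-%d, cols %d-%d, frames %d-%d"
          % (i, dy.start, dy.stop, dx.start, dx.stop, dt.start, dt.stop))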

He also creates four rectangles that move to the position of each paw, which helps tremendously with deciding which paw corresponds to which contact, as we only have to decide which paw the first rectangle is and start sorting from there.

Sadly, I haven’t had time to try this out on some of the nastier measurements (i.e. where the paws overlap very closely and the smoothing might make them interconnect). I’m also curious about the performance of the solution, as I’m not sure how it knows to stop looking when there’s no data left (like when the dog walked over the plate within 50% of the frames). Anyway, I’ll update this post as soon as I have more results of my own!
