This question concerns multi-sensor integration.
Is it possible to restrict the application of motion-capture data (e.g. from a Kinect) to a single axis in Blender?
The idea is this: a sensor capturing a performer along the X axis of a room would drive a rig in Blender only on its X axis, as a constraint. Then a second camera, mounted 90 degrees from the first (i.e. facing the Y axis), would supply a second data set to enhance the animation on the rig's Y axis - provided its effect can be limited to that axis, so it does not overwrite the X-axis animation already established. If this works, the approach could be extended to the negative directions and to the Z axis, as a workaround for the limitations of a single sensor.
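To make the idea concrete, here is a minimal standalone sketch (plain Python, no Blender required; the sensor streams are invented sample data) of merging two per-axis capture streams so that each sensor writes only its own axis of a location keyframe:

```python
def merge_axis_stream(frames, stream, axis):
    """Write one sensor's 1-D samples into per-frame 3-D locations,
    touching only the given axis (0 = X, 1 = Y, 2 = Z)."""
    for frame, value in stream.items():
        loc = frames.setdefault(frame, [0.0, 0.0, 0.0])
        loc[axis] = value  # other axes keep whatever was set before
    return frames

# Hypothetical data: sensor facing the X axis contributes X samples...
x_stream = {1: 0.0, 2: 0.5, 3: 1.0}
# ...and a camera mounted 90 degrees away contributes Y samples.
y_stream = {1: 0.2, 2: 0.4, 3: 0.6}

frames = {}
merge_axis_stream(frames, x_stream, axis=0)
merge_axis_stream(frames, y_stream, axis=1)
print(frames[2])  # both sensors contribute; neither overwrites the other
```

Inside Blender itself, the analogous move would be to key only one channel of `location` per sensor (e.g. `obj.keyframe_insert("location", index=0)` keys just X), or to funnel each sensor through an empty and use a Copy Location constraint with only one axis enabled - though which approach fits best depends on how the capture data arrives.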
Has anyone considered this?