Numpy vs Matlab reshape

Being a long-term Matlab user, I have almost switched to Python completely. But I wasn’t paying attention to the different reshape behavior of numpy vs Matlab, and it cost me a night to catch a nasty bug because of that. I had always assumed that when I “vectorize” a matrix, it expands along the row index first, as in Matlab. It turns out that numpy’s default behavior is to expand the column index first. This is the ‘C’ order, as opposed to the Fortran order used by Matlab. To override the default behavior, simply set order to ‘F’ in the argument. For example,

z = x.reshape(3,4,order='F')
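
To see the difference concretely, here is a quick sanity check (a minimal sketch; the array values are just for illustration):

import numpy as np

x = np.arange(12)                  # [0, 1, ..., 11]
z_c = x.reshape(3, 4)              # default 'C' order: last index (column) varies fastest
z_f = x.reshape(3, 4, order='F')   # 'F' order: first index (row) varies fastest, as in Matlab

print(z_c)   # [[ 0  1  2  3]
             #  [ 4  5  6  7]
             #  [ 8  9 10 11]]
print(z_f)   # [[ 0  3  6  9]
             #  [ 1  4  7 10]
             #  [ 2  5  8 11]]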

Underactuated robotics

Watched the first lecture of Underactuated Robotics by Prof. Tedrake. It was great. His lecture notes/book are available online, and the example code runs directly on Colab.

So what is underactuated robotics? Consider a standard manipulator equation with state $latex q$

$latex M(q) \ddot{q}+C(q,\dot{q}) \dot{q} = \tau_g(q) + B(q) u,$

where the L.H.S. collects the “Ma” terms, the R.H.S. collects the force terms, $latex M(q)$ is the mass/inertia matrix and is positive definite, $latex u$ is the control input, and $latex B(q)$ maps the control input to generalized forces on $latex q$.

We can rearrange the above to

$latex \ddot{q}= M(q)^{-1} [ \tau_g(q) + B(q) u - C(q,\dot{q} )\dot{q}] =\underset{f_1(q,\dot{q})}{\underbrace{M(q)^{-1}[ \tau_g(q) - C(q,\dot{q} )\dot{q}]}} +\underset{f_2(q,\dot{q})}{\underbrace{M(q)^{-1} B(q) }}u .$

Note that if $latex f_2(q,\dot{q})$ has full row rank (or simply $latex B(q)$ has full row rank, since $latex M(q)$ is positive definite and hence full rank), then we can achieve any desired $latex \ddot{q}^d$ by picking $latex u$ as

$latex u = f_2^{\dagger} (q,\dot{q}) (\ddot{q}^{d} - f_1(q,\dot{q})),$ where $latex f_2^{\dagger}$ is the pseudo-inverse of $latex f_2$. We say such a robotic system is fully actuated.
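
As a concrete illustration, here is a minimal numpy sketch of that controller. The dynamics terms below (M, C, tau_g, B) are made up purely for illustration and are not from Tedrake's code; only the last two lines of control() implement the formula above.

import numpy as np

def dynamics_terms(q, qdot):
    # Toy two-joint dynamics, invented for illustration only.
    M = np.array([[2.0, 0.3], [0.3, 1.0]])                        # mass/inertia matrix (positive definite)
    C = np.array([[0.0, -0.1 * qdot[1]], [0.1 * qdot[0], 0.0]])   # Coriolis/centrifugal terms
    tau_g = -9.8 * np.sin(q)                                      # gravity torques
    B = np.eye(2)                                                 # full row rank: fully actuated
    return M, C, tau_g, B

def control(q, qdot, qdd_desired):
    M, C, tau_g, B = dynamics_terms(q, qdot)
    Minv = np.linalg.inv(M)
    f1 = Minv @ (tau_g - C @ qdot)                   # f_1(q, qdot)
    f2 = Minv @ B                                    # f_2(q, qdot)
    return np.linalg.pinv(f2) @ (qdd_desired - f1)   # u = f_2^dagger (qdd_desired - f_1)

u = control(q=np.array([0.1, -0.2]), qdot=np.array([0.0, 0.5]), qdd_desired=np.zeros(2))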

On the other hand, if $latex f_2(q,\dot{q})$ does not have full row rank, the trivial controller above will not work, and we have a much more challenging and interesting scenario. In that case, we say the robotic system is underactuated.


Linformer

video and paper.

Remarks:

  • Projects the embeddings (the keys and values) down to a lower dimension to save computation and memory (see the sketch after this list)
  • Some gain in speed, but it does not look very significant; the tradeoff in performance seems larger than claimed
  • Theorem 1, based on the JL lemma, does not use properties of attention itself. It seems the same argument could be applied anywhere (not just to attention), so the theorem feels like a bit of a stretch
  • With the same goal of speeding up transformers, the “kernelized transformer” appears to be a stronger piece of work
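
Here is a minimal numpy sketch of the projection idea (the shapes and names are mine, not the paper's code): the keys and values are projected along the sequence dimension from length n down to k, so the attention matrix becomes n-by-k instead of n-by-n.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def linformer_attention(Q, K, V, E, F):
    # Q, K, V: (n, d); E, F: (k, n) learned projections along the sequence axis.
    d = Q.shape[-1]
    K_proj = E @ K                              # (k, d)
    V_proj = F @ V                              # (k, d)
    scores = Q @ K_proj.T / np.sqrt(d)          # (n, k) instead of (n, n)
    return softmax(scores, axis=-1) @ V_proj    # (n, d)

n, d, k = 1024, 64, 128
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
E, F = (rng.standard_normal((k, n)) / np.sqrt(n) for _ in range(2))
out = linformer_attention(Q, K, V, E, F)        # cost is O(n k d) rather than O(n^2 d)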

SIREN

A comeback of signal processing? There is a video from the authors and one from Yannic, the arXiv paper, and a nice website. The 3D reconstruction is very convincing. By the way, there is also another video presentation on implicit representations.

Some remarks:

  1. Represent signals as functions instead of as multidimensional arrays of samples (not a completely new idea, as the authors point out)
  2. Match not just the signal itself but also its derivatives
  3. Use a sinusoidal activation function, so that the derivative of the network is still well-defined (and is itself a SIREN); see the sketch after this list.
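
Here is a minimal numpy sketch of a SIREN-style forward pass. The omega_0 = 30 scale follows the paper's convention, but the initialization below is a rough simplification of the paper's scheme and the layer sizes are just a toy choice:

import numpy as np

def siren_layer(x, W, b, omega_0=30.0):
    # sin activation: its derivative is omega_0*cos(.), i.e. a shifted sinusoid,
    # so every derivative of the network remains well-defined (and SIREN-like).
    return np.sin(omega_0 * (x @ W.T + b))

rng = np.random.default_rng(0)
dims = [1, 64, 64, 1]    # e.g. map a 1-D coordinate t to a 1-D signal value
weights = [rng.uniform(-np.sqrt(6 / m), np.sqrt(6 / m), size=(n_out, m))
           for m, n_out in zip(dims[:-1], dims[1:])]   # rough init, not the paper's exact scheme
biases = [np.zeros(n_out) for n_out in dims[1:]]

t = np.linspace(-1, 1, 256).reshape(-1, 1)    # input coordinates
h = t
for W, b in zip(weights[:-1], biases[:-1]):
    h = siren_layer(h, W, b)
y = h @ weights[-1].T + biases[-1]            # linear output layer, shape (256, 1)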

 
