On SMP, there is an inherent race when swapping: the old thread adds itself back to the run queue before calling into the arch layer to do the context switch. The former step is properly synchronized under the scheduler lock, and the latter operates with interrupts locally disabled. But until somewhere in the middle of arch_switch(), the old thread (which is in the run queue!) does not have complete saved state that can be restored. So it is possible for another CPU to grab a thread before it is saved and try to restore its unsaved register contents, which are garbage (typically whatever state it had at the last interrupt).

Fix this by leveraging the "swapped_from" pointer already passed to arch_switch() as a synchronization primitive: when the switch implementation writes the new handle value, we know the switch is complete. Then we can wait for that in z_swap() and at interrupt exit.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Files changed:

- gen_offset.h
- kernel_arch_interface.h
- kernel_internal.h
- kernel_offsets.h
- ksched.h
- kswap.h
- offsets_short.h