first commit

Jose Caban
2025-06-07 01:59:34 -04:00
commit 388ac241f0
3558 changed files with 9116289 additions and 0 deletions

Homework 1
Due January 30
==============
Vladimir Urazov
gtg308i
1. Assuming we are writing this in C, and that the function
CurrentThread() returns a Thread*, such that the address in the
pointer can be used to uniquely identify a thread, we could do the
following:
typedef struct {
    int lock_count;           /* The number of times this mutex has been
                                 locked by the owner thread. */
    Thread* owner_thread;     /* The thread that locked this mutex. */
    Mutex lock_count_mutex;   /* Regular mutex to ensure we don't run into
                                 problems when two different threads try
                                 to lock an unlocked RecMutex
                                 simultaneously. */
    Condition mutex_unlocked; /* Will be signalled any time lock_count
                                 is zero. */
} RecMutex;
void rec_mutex_init(RecMutex* pm) {
    pm->lock_count_mutex = MUTEX_INITIALIZER; // whatever is appropriate
    pm->mutex_unlocked = COND_INITIALIZER;    // for the thread library
                                              // we are using.
    pm->owner_thread = NULL;
    pm->lock_count = 0;
}
void rec_mutex_lock(RecMutex* pm) {
    /* See if the rec mutex is already locked by the current thread: */
    if (CurrentThread() == pm->owner_thread) {
        Lock(pm->lock_count_mutex) {
            pm->lock_count++;
        }
    } else {
        Lock(pm->lock_count_mutex) {
            /* Wait for everybody to unlock the rec mutex: */
            while (pm->lock_count > 0) {
                Wait(pm->lock_count_mutex, pm->mutex_unlocked);
            }
            /* Now that the previous owner has unlocked the rec mutex,
               we can lock it: */
            pm->owner_thread = CurrentThread();
            pm->lock_count = 1;
        }
    }
}
void rec_mutex_unlock(RecMutex* pm) {
    Lock(pm->lock_count_mutex) {
        pm->lock_count--;
        /* If this is the last unlock, signal a waiting thread that it can
           lock the recursive mutex now: */
        if (pm->lock_count < 1) {
            pm->owner_thread = NULL;
            Signal(pm->mutex_unlocked);
        }
    }
}
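For concreteness, here is a minimal sketch (an illustration, not part of
the answer above) of the same design written against plain POSIX threads.
pthread_self() and pthread_equal() replace the hypothetical
CurrentThread() comparison; since a pthread_t cannot portably be compared
against NULL, an explicit owned flag records whether owner_thread is
valid, and the owner check is done while holding the state mutex rather
than relying on an unsynchronized read.
#include <pthread.h>
#include <stdbool.h>

typedef struct {
    int             lock_count;   /* times locked by the owning thread      */
    bool            owned;        /* is owner_thread currently valid?       */
    pthread_t       owner_thread; /* the owner, meaningful only if owned    */
    pthread_mutex_t state_mutex;  /* protects all of the fields above       */
    pthread_cond_t  unlocked;     /* signalled when lock_count drops to 0   */
} RecMutex;

void rec_mutex_init(RecMutex* pm) {
    pthread_mutex_init(&pm->state_mutex, NULL);
    pthread_cond_init(&pm->unlocked, NULL);
    pm->lock_count = 0;
    pm->owned = false;
}

void rec_mutex_lock(RecMutex* pm) {
    pthread_mutex_lock(&pm->state_mutex);
    if (pm->owned && pthread_equal(pm->owner_thread, pthread_self())) {
        pm->lock_count++;                        /* re-entrant acquisition  */
    } else {
        while (pm->lock_count > 0)               /* wait for the owner      */
            pthread_cond_wait(&pm->unlocked, &pm->state_mutex);
        pm->owner_thread = pthread_self();
        pm->owned = true;
        pm->lock_count = 1;
    }
    pthread_mutex_unlock(&pm->state_mutex);
}

void rec_mutex_unlock(RecMutex* pm) {
    pthread_mutex_lock(&pm->state_mutex);
    if (--pm->lock_count == 0) {
        pm->owned = false;
        pthread_cond_signal(&pm->unlocked);      /* wake one waiting thread */
    }
    pthread_mutex_unlock(&pm->state_mutex);
}
A thread may then call rec_mutex_lock on some RecMutex m several times,
provided it calls rec_mutex_unlock the same number of times before any
other thread can acquire m.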
2. Performance impact of switching between two threads on Solaris:
   a. Two user-level threads on the same LWP. This switch is very
      efficient. The kernel knows nothing about it, because the LWP
      abstracts the user-level threads away from the kernel: no
      kernel-level context switch is necessary, and the switch is done
      entirely by the user-level thread library.
   b. Two user-level threads on different LWPs. Since we are changing
      from one LWP to another, the kernel must also switch the
      underlying kernel threads. This entails switching thread-specific
      state such as the register contents and the stack. However, since
      both LWPs belong to the same process, process-specific information
      does not need to be updated during the switch, which makes this
      cheaper than a full context switch. (A sketch after part d shows
      how a pthreads program can ask for one arrangement or the other.)
   c. Two user-level threads in different processes. This kind of
      switch is slower still, because in addition to all the changes
      outlined above, the kernel also has to switch process-specific
      information, such as the address space.
   d. Two kernel threads unattached to LWPs. This switch is faster than
      switching between two LWPs, because only the kernel-thread-specific
      state needs to change. A kernel thread has only a small data
      structure and an associated stack, so switching between kernel
      threads is relatively fast.
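For illustration only (this is not part of the original answer): in the
POSIX threads API, the choice between an unbound, library-scheduled
thread and a thread bound to its own LWP/kernel thread is expressed
through the contention-scope attribute. The sketch below requests each
kind in turn; support for PTHREAD_SCOPE_PROCESS is
implementation-dependent, so that call is checked.
#include <pthread.h>
#include <stdio.h>

static void* work(void* arg) {
    (void)arg;              /* nothing to do; we only care about creation */
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);

    /* Unbound thread: multiplexed by the user-level library onto LWPs,
       so switches between such threads can stay in user space (case a). */
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_PROCESS) != 0)
        fprintf(stderr, "process contention scope not supported here\n");
    pthread_create(&tid, &attr, work, NULL);
    pthread_join(tid, NULL);

    /* Bound thread: permanently attached to its own LWP/kernel thread,
       so every switch involving it goes through the kernel (case b). */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
    pthread_create(&tid, &attr, work, NULL);
    pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}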
3. Assuming we can use C99-style (or C++-style) declarations inside the
   for statement, this can be done with the following macros. An extra
   level of concatenation is needed so that __LINE__ expands to the
   actual line number before it is pasted, and the comma operator in the
   initializer locks the mutex before the block runs:
   #define LOCK_CONCAT2(a, b) a##b
   #define LOCK_CONCAT(a, b)  LOCK_CONCAT2(a, b)
   #define LOCK(mutex)                                                        \
       for (int LOCK_CONCAT(index, __LINE__) = (pthread_mutex_lock(mutex), 1); \
            LOCK_CONCAT(index, __LINE__);                                      \
            LOCK_CONCAT(index, __LINE__) = 0, pthread_mutex_unlock(mutex))
The macro can then be used as follows:
   pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
   LOCK(&mutex) {
       /* ... do stuff ... */
   }
   As long as we do not declare a variable of our own named indexNNN,
   where NNN happens to be the line number of the LOCK invocation, and
   try to access it from within that LOCK block, the macro works fine.
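As a quick sanity check (illustration only; the file name lock_test.c and
the mutex names are made up), the following self-contained program
repeats the macro definitions from above and nests two LOCK blocks.
Because each invocation sits on a different line, the hidden flags get
distinct indexNNN names and do not interfere. It should build with
something like cc -std=c99 lock_test.c -lpthread:
/* lock_test.c */
#include <pthread.h>
#include <stdio.h>

#define LOCK_CONCAT2(a, b) a##b
#define LOCK_CONCAT(a, b)  LOCK_CONCAT2(a, b)
#define LOCK(mutex)                                                        \
    for (int LOCK_CONCAT(index, __LINE__) = (pthread_mutex_lock(mutex), 1); \
         LOCK_CONCAT(index, __LINE__);                                      \
         LOCK_CONCAT(index, __LINE__) = 0, pthread_mutex_unlock(mutex))

static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

int main(void) {
    LOCK(&a) {
        LOCK(&b) {   /* different line, hence a different indexNNN */
            printf("holding both mutexes\n");
        }            /* b is unlocked here */
    }                /* a is unlocked here */
    return 0;
}
One caveat shared with the macro above: a break or return inside a LOCK
block skips that block's unlock, so the block should only be left by
falling off the end.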