Question 1
----------

In an MT architecture, multiple threads share a single address space and use atomic operations and mutexes to access shared memory. However, for the MT model to work efficiently, the Operating System must provide some form of kernel threads, so that when one thread blocks on I/O another can be scheduled in its place. If the OS does not support kernel threads, the server's performance suffers.

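The shared-address-space-plus-mutex idea can be sketched roughly as follows. This is an illustrative sketch only, not code from the paper; all names (`handle_connection`, the cache, the paths) are hypothetical, and a string build stands in for the blocking disk read.

```python
import threading

# Hypothetical sketch of the MT model: one worker thread per connection,
# all sharing a single address space, with a mutex protecting the shared
# cache.  All names here are illustrative.

cache = {}                      # shared in-memory file cache
cache_lock = threading.Lock()   # mutex guarding the cache
served = []                     # completed responses, for demonstration
served_lock = threading.Lock()

def handle_connection(path):
    # Blocking I/O here would stall only this thread; with kernel-thread
    # support the OS simply schedules another thread meanwhile.
    with cache_lock:
        body = cache.get(path)
    if body is None:
        body = "<contents of %s>" % path   # stand-in for a blocking disk read
        with cache_lock:
            cache[path] = body
    with served_lock:
        served.append((path, body))

threads = [threading.Thread(target=handle_connection, args=(p,))
           for p in ["/a.html", "/b.html", "/a.html"]]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every thread reads and writes the same `cache` object directly, which is exactly why the mutex is mandatory.
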
The AMPED architecture combines the performance characteristics of a SPED architecture while avoiding the problems caused by an Operating System that lacks asynchronous disk I/O. The model uses multiple helper processes to perform blocking disk I/O. These helpers can be separate processes communicating through IPC or kernel threads; the only requirement is that other work can proceed in parallel while a helper is blocked on an I/O call.

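The main-loop-plus-helpers structure can be sketched like this. It is a hypothetical illustration, not the paper's implementation: the paper allows helpers to be separate processes (talking over IPC) or kernel threads, and plain threads with queues stand in for them here purely for portability of the example.

```python
import queue
import threading

# Hypothetical sketch of the AMPED model: a single event-driven main loop
# hands blocking disk reads to helpers and keeps serving other events.

disk_requests = queue.Queue()   # main loop -> helpers (stands in for IPC)
disk_replies = queue.Queue()    # helpers -> main loop

def helper():
    # The helper, not the main loop, pays the cost of blocking on disk.
    while True:
        path = disk_requests.get()
        if path is None:
            return
        disk_replies.put((path, "<contents of %s>" % path))  # "blocking" read

helpers = [threading.Thread(target=helper) for _ in range(2)]
for h in helpers:
    h.start()

requests = ["/a.html", "/b.html", "/c.html"]
for p in requests:                  # dispatch without blocking the loop
    disk_requests.put(p)

responses = {}
while len(responses) < len(requests):
    path, body = disk_replies.get() # in real AMPED this arrives as an event
    responses[path] = body

for _ in helpers:                   # shut the helpers down
    disk_requests.put(None)
for h in helpers:
    h.join()
```

The main loop never issues a blocking read itself; it only dispatches requests and consumes completion events, which is the whole point of the architecture.
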
In terms of disk I/O, neither MT nor AMPED halts server operations during a disk access, assuming kernel thread support in the case of the MT server. Both models can keep one disk request outstanding per thread or per helper, respectively.

Memory usage is higher in an MT server, since each thread consumes memory and kernel resources. In the AMPED model the helpers consume their own memory (whether as processes or threads), but they are needed on a per-disk-operation basis rather than on a per-connection basis as in the MT model.

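A back-of-envelope calculation shows why the scaling difference matters. Every figure below is an assumption made up for illustration (typical-looking stack and helper sizes), not a measurement from the text:

```python
# All figures are assumed for illustration, not taken from any measurement.
connections = 10_000            # simultaneous connections (assumed)
concurrent_disk_ops = 64        # disk operations in flight (assumed)

per_thread = 64 * 1024          # stack + kernel state per MT thread (assumed)
per_helper = 128 * 1024         # memory per AMPED helper (assumed)

mt_memory = connections * per_thread            # grows with connections
amped_memory = concurrent_disk_ops * per_helper # grows with disk parallelism
```

Even with each helper assumed twice as heavy as a thread, the AMPED total is orders of magnitude smaller because it scales with disk parallelism rather than with connection count.
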
The AMPED model makes it easier than the MT model to gather information about requests, whether for performance tuning or for accounting. It also avoids synchronization on the cache, whereas the MT model can use a single cache but must synchronize access to it. Long-lived connections cost an AMPED server very little, simply a file descriptor; the MT model ties up an entire thread on a slow client stuck on a dial-up line in the middle of nowhere. However, communication in the AMPED model requires IPC (when using multiprocess helpers), which can become a limiting factor.

Question 2
----------

a) Consider the following sequence of operations:

P1: W(x)1 W(x)3
P2: W(x)2
P3: R(x)3 R(x)2
P4: R(x)2 R(x)3

The execution is causally consistent. W(x)2 is concurrent with both of P1's writes: a write is causally related to another only when it "comes after a read that returned the value of the other write" (per the slides), and P2 performs no such read. Concurrent writes may be seen in different orders on different machines, so P3 observing 3 then 2 while P4 observes 2 then 3 is allowed.

Change to make it non-causally consistent (P2 now reads 1 before writing 2, making W(x)2 causally dependent on W(x)1, yet P3 sees the two writes in the opposite order):

P1: W(x)1
P2: R(x)1 W(x)2
P3: R(x)2 R(x)1
P4: R(x)1 R(x)2

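Both cases can be checked mechanically with a tiny hypothetical helper (names invented here): does the order in which one process saw the writes respect every causal "a happened-before b" pair?

```python
# Hypothetical checker: does the order a process saw the writes in
# respect every causal pair (a must be seen before b)?
def respects(order_seen, causal_pairs):
    pos = {v: i for i, v in enumerate(order_seen)}
    return all(pos[a] < pos[b]
               for a, b in causal_pairs
               if a in pos and b in pos)

# Part (a): W(x)2 is concurrent with W(x)1 and W(x)3; the only causal pair
# is W(x)1 before W(x)3 (program order on P1).  Both views are legal.
a_pairs = [(1, 3)]
p3_ok = respects([3, 2], a_pairs)   # P3 saw 3 then 2
p4_ok = respects([2, 3], a_pairs)   # P4 saw 2 then 3

# Modified sequence: P2 read 1 before writing 2, so W(x)1 -> W(x)2.
# P3's view (2 then 1) now violates that causal order.
p3_bad = respects([2, 1], [(1, 2)])
```
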
b) Consider the following sequence of operations:

P1: W(x)1 W(x)3
P2: R(x)1 R(x)1 R(x)3

Is this execution strictly consistent? Add or modify an event to change the answer.

No, it is not strictly consistent. Strict consistency requires every read to return the result of the most recent write in absolute time; assuming the layout reflects real time, P2's second R(x)1 occurs after W(x)3 has completed, so it should have returned 3.

P1: W(x)1 W(x)3
P2: R(x)1 R(x)3 R(x)3

Now every read returns the value of the most recent write, so this version is strictly consistent.

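The strict-consistency rule is simple enough to state as code. This is a hypothetical single-variable checker over an assumed real-time interleaving of part (b)'s events (the original's second R(x)1 is assumed to land after W(x)3, which is exactly why it fails):

```python
# Hypothetical checker for strict consistency on one variable: given the
# events in absolute-time order, every read must return the latest write.
def strictly_consistent(trace):
    latest = None
    for op, v in trace:
        if op == 'W':
            latest = v
        elif v != latest:   # a read that missed the most recent write
            return False
    return True

# Assumed real-time interleaving of part (b):
trace_original = [('W', 1), ('R', 1), ('W', 3), ('R', 1), ('R', 3)]
trace_fixed    = [('W', 1), ('R', 1), ('W', 3), ('R', 3), ('R', 3)]
```
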
c) Consider the following sequence of operations:

P1: W(x)1 W(x)3 S
P2: R(x)3 S R(x)1

Is this execution weakly consistent? Add or modify an event to change the answer.

No, it is not weakly consistent. The synchronization operation S guarantees that all previous writes have completed and become visible; after P2 synchronizes, it must see the final value 3, so the subsequent R(x)1 violates weak consistency.

P1: W(x)1 W(x)3 S
P2: R(x)3 S R(x)3

Now it is weakly consistent.

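The argument above can be sketched as a checker too. This is a deliberately simplified, hypothetical model (one variable, no writes after the sync): after a process performs S, its reads must not return a value older than the last write completed before that S.

```python
# Hypothetical, simplified weak-consistency check: a read issued after a
# synchronization (S) must not return a value older than the last write
# completed before that S.  (One variable, no writes after the sync.)
def weakly_consistent(trace):
    latest = None    # most recent write so far
    synced = None    # value guaranteed visible after the last S
    for op, v in trace:
        if op == 'W':
            latest = v
        elif op == 'S':
            synced = latest
        elif synced is not None and v != synced:
            return False  # post-sync read returned a stale value
    return True

# P2's view of part (c), with P1's writes shown before P2's sync:
c_original = [('W', 1), ('W', 3), ('R', 3), ('S', None), ('R', 1)]
c_fixed    = [('W', 1), ('W', 3), ('R', 3), ('S', None), ('R', 3)]
```

Note that the R(x)3 before the sync passes unconditionally: weak consistency makes no promises about ordinary reads until a synchronization has happened.
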
d) Consider the following sequence of operations:

P1: L W(x)1 W(x)3 U
P2: L R(x)3 U R(x)1

Is this execution release consistent? Add or modify an event to change the answer.

No, although it is arguably possible: release consistency only constrains accesses performed inside acquire/release pairs, so the unprotected R(x)1 after the unlock is technically unconstrained and could return a stale value. Still, having the read return 1 after P2 already observed 3 inside its critical section would require the old value to reappear, which no sensible implementation would do.

P1: L W(x)1 W(x)3 U
P2: L R(x)3 U R(x)3

That is definitely release consistent.