Compare commits


No commits in common. "riscv" and "original" have entirely different histories.

123 changed files with 3439 additions and 6088 deletions

.gitignore

@@ -1,20 +1,17 @@
*~
_*
*.o
*.a
*.d
*.asm
*.sym
*.img
*.gch
vectors.S
bootblock
entryother
initcode
initcode.out
kernelmemfs
mkfs/mkfs
mkfs
kernel/kernel
user/usys.S
.gdbinit
target/


@@ -1,165 +0,0 @@
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.

Makefile

@@ -1,7 +1,34 @@
K=kernel
M=mkfs
U=user
P=programs
OBJS = \
$K/entry.o \
$K/start.o \
$K/console.o \
$K/printf.o \
$K/uart.o \
$K/kalloc.o \
$K/spinlock.o \
$K/string.o \
$K/main.o \
$K/vm.o \
$K/proc.o \
$K/swtch.o \
$K/trampoline.o \
$K/trap.o \
$K/syscall.o \
$K/sysproc.o \
$K/bio.o \
$K/fs.o \
$K/log.o \
$K/sleeplock.o \
$K/file.o \
$K/pipe.o \
$K/exec.o \
$K/sysfile.o \
$K/kernelvec.o \
$K/plic.o \
$K/virtio_disk.o
# riscv64-unknown-elf- or riscv64-linux-gnu-
# perhaps in /opt/riscv/bin
@@ -34,8 +61,6 @@ CFLAGS += -MD
CFLAGS += -mcmodel=medany
CFLAGS += -ffreestanding -fno-common -nostdlib -mno-relax
CFLAGS += -I.
CFLAGS += -march=rv64gc
CFLAGS += -mabi=lp64d
CFLAGS += $(shell $(CC) -fno-stack-protector -E -x c /dev/null >/dev/null 2>&1 && echo -fno-stack-protector)
# Disable PIE when possible (for Ubuntu 16.10 toolchain)
@@ -48,35 +73,41 @@ endif
LDFLAGS = -z max-page-size=4096
.PHONY: build kernel ulib mkfs clean
$K/kernel: $(OBJS) $K/kernel.ld $U/initcode
$(LD) $(LDFLAGS) -T $K/kernel.ld -o $K/kernel $(OBJS)
$(OBJDUMP) -S $K/kernel > $K/kernel.asm
$(OBJDUMP) -t $K/kernel | sed '1,/SYMBOL TABLE/d; s/ .* / /; /^$$/d' > $K/kernel.sym
build: kernel fs.img
$U/initcode: $U/initcode.S
$(CC) $(CFLAGS) -march=rv64g -nostdinc -I. -Ikernel -c $U/initcode.S -o $U/initcode.o
$(LD) $(LDFLAGS) -N -e start -Ttext 0 -o $U/initcode.out $U/initcode.o
$(OBJCOPY) -S -O binary $U/initcode.out $U/initcode
$(OBJDUMP) -S $U/initcode.o > $U/initcode.asm
kernel:
$(MAKE) -C $K
tags: $(OBJS) _init
etags *.S *.c
ulib:
$(MAKE) -C $U
ULIB = $U/ulib.o $U/usys.o $U/printf.o $U/umalloc.o
mkfs:
$(MAKE) -C $M
_%: %.o $(ULIB)
$(LD) $(LDFLAGS) -T $U/user.ld -o $@ $^
$(OBJDUMP) -S $@ > $*.asm
$(OBJDUMP) -t $@ | sed '1,/SYMBOL TABLE/d; s/ .* / /; /^$$/d' > $*.sym
USERLIBS = $U/ulib.o $U/usys.o $U/printf.o $U/umalloc.o
$U/usys.S : $U/usys.pl
perl $U/usys.pl > $U/usys.S
%.o: %.c *.h
$(CC) $(CFLAGS) -c $<
$U/usys.o : $U/usys.S
$(CC) $(CFLAGS) -c -o $U/usys.o $U/usys.S
_%: %.o ulib
$(LD) $(LDFLAGS) -T $U/user.ld -o $@ $< $(USERLIBS)
# $(OBJDUMP) -S $@ > $*.asm
# $(OBJDUMP) -t $@ | sed '1,/SYMBOL TABLE/d; s/ .* / /; /^$$/d' > $*.sym
$U/_forktest: $U/forktest.o ulib
$U/_forktest: $U/forktest.o $(ULIB)
# forktest has less library code linked in - needs to be small
# in order to be able to max out the proc table.
$(LD) $(LDFLAGS) -N -e main -Ttext 0 -o $U/_forktest $U/forktest.o $U/ulib.o $U/usys.o
# $(OBJDUMP) -S $U/_forktest > $U/forktest.asm
$(OBJDUMP) -S $U/_forktest > $U/forktest.asm
mkfs/mkfs: mkfs/mkfs.c $K/fs.h $K/param.h
gcc -Werror -Wall -I. -o mkfs/mkfs mkfs/mkfs.c
# Prevent deletion of intermediate files, e.g. cat.o, after first build, so
# that disk image changes after first build are persistent until clean. More
@@ -85,36 +116,34 @@ $U/_forktest: $U/forktest.o ulib
.PRECIOUS: %.o
UPROGS=\
$P/_cat\
$P/_echo\
$P/_forktest\
$P/_grep\
$P/_grind\
$P/_init\
$P/_kill\
$P/_ln\
$P/_ls\
$P/_mkdir\
$P/_rm\
$P/_sh\
$P/_stressfs\
$P/_usertests\
$P/_wc\
$P/_zombie\
$P/_shutdown\
$P/_clear\
$U/_cat\
$U/_echo\
$U/_forktest\
$U/_grep\
$U/_init\
$U/_kill\
$U/_ln\
$U/_ls\
$U/_mkdir\
$U/_rm\
$U/_sh\
$U/_stressfs\
$U/_usertests\
$U/_grind\
$U/_wc\
$U/_zombie\
fs.img: mkfs README.md $(UPROGS)
mkfs/mkfs fs.img README.md $(UPROGS)
fs.img: mkfs/mkfs README $(UPROGS)
mkfs/mkfs fs.img README $(UPROGS)
-include kernel/*.d user/*.d
clean:
$(MAKE) -C $K clean
$(MAKE) -C $M clean
$(MAKE) -C $U clean
rm -f *.tex *.dvi *.idx *.aux *.log *.ind *.ilg \
*/*.o */*.a */*.d */*.asm */*.sym fs.img .gdbinit \
*/*.o */*.d */*.asm */*.sym \
$U/initcode $U/initcode.out $K/kernel fs.img \
mkfs/mkfs .gdbinit \
$U/usys.S \
$(UPROGS)
# try to generate a unique GDB port
@@ -127,18 +156,18 @@ ifndef CPUS
CPUS := 3
endif
QEMUOPTS = -machine virt -bios none -kernel kernel/kernel -m 128M -smp $(CPUS) -nographic
QEMUOPTS = -machine virt -bios none -kernel $K/kernel -m 128M -smp $(CPUS) -nographic
QEMUOPTS += -global virtio-mmio.force-legacy=false
QEMUOPTS += -drive file=fs.img,if=none,format=raw,id=x0
QEMUOPTS += -device virtio-blk-device,drive=x0,bus=virtio-mmio-bus.0
qemu: kernel fs.img
qemu: $K/kernel fs.img
$(QEMU) $(QEMUOPTS)
.gdbinit: .gdbinit.tmpl-riscv
sed "s/:1234/:$(GDBPORT)/" < $^ > $@
qemu-gdb: kernel .gdbinit fs.img
qemu-gdb: $K/kernel .gdbinit fs.img
@echo "*** Now run 'gdb' in another window." 1>&2
$(QEMU) $(QEMUOPTS) -S $(QEMUGDB)

README

@@ -0,0 +1,49 @@
xv6 is a re-implementation of Dennis Ritchie's and Ken Thompson's Unix
Version 6 (v6). xv6 loosely follows the structure and style of v6,
but is implemented for a modern RISC-V multiprocessor using ANSI C.
ACKNOWLEDGMENTS
xv6 is inspired by John Lions's Commentary on UNIX 6th Edition (Peer
to Peer Communications; ISBN: 1-57398-013-7; 1st edition (June 14,
2000)). See also https://pdos.csail.mit.edu/6.1810/, which provides
pointers to on-line resources for v6.
The following people have made contributions: Russ Cox (context switching,
locking), Cliff Frey (MP), Xiao Yu (MP), Nickolai Zeldovich, and Austin
Clements.
We are also grateful for the bug reports and patches contributed by
Takahiro Aoyagi, Silas Boyd-Wickizer, Anton Burtsev, carlclone, Ian
Chen, Dan Cross, Cody Cutler, Mike CAT, Tej Chajed, Asami Doi,
eyalz800, Nelson Elhage, Saar Ettinger, Alice Ferrazzi, Nathaniel
Filardo, flespark, Peter Froehlich, Yakir Goaron, Shivam Handa, Matt
Harvey, Bryan Henry, jaichenhengjie, Jim Huang, Matúš Jókay, John
Jolly, Alexander Kapshuk, Anders Kaseorg, kehao95, Wolfgang Keller,
Jungwoo Kim, Jonathan Kimmitt, Eddie Kohler, Vadim Kolontsov, Austin
Liew, l0stman, Pavan Maddamsetti, Imbar Marinescu, Yandong Mao, Matan
Shabtay, Hitoshi Mitake, Carmi Merimovich, Mark Morrissey, mtasm, Joel
Nider, Hayato Ohhashi, OptimisticSide, Harry Porter, Greg Price, Jude
Rich, segfault, Ayan Shafqat, Eldar Sehayek, Yongming Shen, Fumiya
Shigemitsu, Cam Tenny, tyfkda, Warren Toomey, Stephen Tu, Rafael Ubal,
Amane Uehara, Pablo Ventura, Xi Wang, WaheedHafez, Keiichi Watanabe,
Nicolas Wolovick, wxdao, Grant Wu, Jindong Zhang, Icenowy Zheng,
ZhUyU1997, and Zou Chang Wei.
The code in the files that constitute xv6 is
Copyright 2006-2022 Frans Kaashoek, Robert Morris, and Russ Cox.
ERROR REPORTS
Please send errors and suggestions to Frans Kaashoek and Robert Morris
(kaashoek,rtm@mit.edu). The main purpose of xv6 is as a teaching
operating system for MIT's 6.1810, so we are more interested in
simplifications and clarifications than new features.
BUILDING AND RUNNING XV6
You will need a RISC-V "newlib" tool chain from
https://github.com/riscv/riscv-gnu-toolchain, and qemu compiled for
riscv64-softmmu. Once they are installed, and in your shell
search path, you can run "make qemu".


@@ -1,91 +0,0 @@
# xv6-riscv
MIT's xv6-riscv operating system, now in Rust!
This is a passion project for me - I've always wanted to write an operating system.
I decided to port the xv6 operating system so that I could try porting a moderately
sized codebase to my favorite programming language, Rust.
> xv6 is a re-implementation of Dennis Ritchie's and Ken Thompson's Unix
> Version 6 (v6). xv6 loosely follows the structure and style of v6,
> but is implemented for a modern RISC-V multiprocessor using ANSI C.
To start the project, I made a basic Rust crate that compiled to a static library.
At link time, the linker combines the static library with the C objects, producing
a hybrid kernel. Once the entire kernel is written in Rust, the link process should
be a lot simpler (just Rust and assembly). At that point, I can start refactoring the
kernel to use more of Rust's features that don't translate well across FFI boundaries.
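A crate set up this way would have a manifest along these lines. This is a minimal sketch based on the `librustkernel.a` artifact the kernel Makefile links in; the exact manifest contents are assumptions, not taken from this repository:

```toml
[package]
name = "rustkernel"
edition = "2021"

[lib]
# Build a static archive (librustkernel.a) that the C linker can consume,
# instead of the default Rust rlib.
crate-type = ["staticlib"]

[profile.release]
# Common bare-metal choice: no unwinding machinery in a no_std kernel.
panic = "abort"
```

With `crate-type = ["staticlib"]`, `cargo build --release` emits `target/<triple>/release/librustkernel.a`, which matches the `RUST_LIB` path the kernel Makefile passes to `$(LD)`.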
## Features
- [x] Multi-core processing
- [x] Paging
- [x] Pre-emptive multitasking
- [x] File system
- [x] Process communication using pipes
- [ ] Entirely Rust kernel (no more C code)
- [ ] [Round-robin scheduling](https://en.wikipedia.org/wiki/Round-robin_scheduling)
- [ ] Rust ABI for syscalls (I'll probably use [stabby](https://crates.io/crates/stabby) for this)
- [ ] Networking
- [ ] Running on real hardware (likely a [Milk-V Duo](https://milkv.io/duo))
- [ ] Port Rust standard library
## Building and running
Build requirements:
- [A RISC-V C toolchain](https://github.com/riscv/riscv-gnu-toolchain)
- [QEMU](https://www.qemu.org/download/) (qemu-system-riscv64)
- [A nightly Rust toolchain](https://rustup.rs/)
The makefile is split into multiple levels to keep the build scripts separate,
but the most important commands can be run from the project root.
- `make kernel` builds the kernel.
- `make mkfs` builds `mkfs`, the tool to help create the file system image.
- `make fs.img` uses `mkfs` to build the file system image.
- `make qemu` builds the kernel and file system, and then runs it in QEMU.
- `make clean` removes built artifacts, including from Rust.
## Contributing
Pull requests will be ignored.
## Authors and acknowledgements
Rewrite:
- Garen Tyler \<<garentyler@garen.dev>>
Source:
> xv6 is inspired by John Lions's Commentary on UNIX 6th Edition (Peer
> to Peer Communications; ISBN: 1-57398-013-7; 1st edition (June 14,
> 2000)). See also https://pdos.csail.mit.edu/6.1810/, which provides
> pointers to on-line resources for v6.
>
> The following people have made contributions: Russ Cox (context switching,
> locking), Cliff Frey (MP), Xiao Yu (MP), Nickolai Zeldovich, and Austin
> Clements.
>
> We are also grateful for the bug reports and patches contributed by
> Takahiro Aoyagi, Silas Boyd-Wickizer, Anton Burtsev, carlclone, Ian
> Chen, Dan Cross, Cody Cutler, Mike CAT, Tej Chajed, Asami Doi,
> eyalz800, Nelson Elhage, Saar Ettinger, Alice Ferrazzi, Nathaniel
> Filardo, flespark, Peter Froehlich, Yakir Goaron, Shivam Handa, Matt
> Harvey, Bryan Henry, jaichenhengjie, Jim Huang, Matúš Jókay, John
> Jolly, Alexander Kapshuk, Anders Kaseorg, kehao95, Wolfgang Keller,
> Jungwoo Kim, Jonathan Kimmitt, Eddie Kohler, Vadim Kolontsov, Austin
> Liew, l0stman, Pavan Maddamsetti, Imbar Marinescu, Yandong Mao, Matan
> Shabtay, Hitoshi Mitake, Carmi Merimovich, Mark Morrissey, mtasm, Joel
> Nider, Hayato Ohhashi, OptimisticSide, Harry Porter, Greg Price, Jude
> Rich, segfault, Ayan Shafqat, Eldar Sehayek, Yongming Shen, Fumiya
> Shigemitsu, Cam Tenny, tyfkda, Warren Toomey, Stephen Tu, Rafael Ubal,
> Amane Uehara, Pablo Ventura, Xi Wang, WaheedHafez, Keiichi Watanabe,
> Nicolas Wolovick, wxdao, Grant Wu, Jindong Zhang, Icenowy Zheng,
> ZhUyU1997, and Zou Chang Wei.
## License
All code written by me in this project is [LGPLv3](https://choosealicense.com/licenses/lgpl-3.0/) licensed.
Any existing code appears to be under the [MIT](https://choosealicense.com/licenses/mit/) license.


@@ -1,84 +0,0 @@
R=rustkernel
KERNEL_SOURCES = \
entry.c \
swtch.c \
trampoline.c \
bio.c \
fs.c \
log.c \
exec.c \
sysfile.c \
kernelvec.c \
virtio_disk.c
OBJS = $(KERNEL_SOURCES:%.c=%.o)
# riscv64-unknown-elf- or riscv64-linux-gnu-
# perhaps in /opt/riscv/bin
#TOOLPREFIX =
# Try to infer the correct TOOLPREFIX if not set
ifndef TOOLPREFIX
TOOLPREFIX := $(shell if riscv64-unknown-elf-objdump -i 2>&1 | grep 'elf64-big' >/dev/null 2>&1; \
then echo 'riscv64-unknown-elf-'; \
elif riscv64-linux-gnu-objdump -i 2>&1 | grep 'elf64-big' >/dev/null 2>&1; \
then echo 'riscv64-linux-gnu-'; \
elif riscv64-unknown-linux-gnu-objdump -i 2>&1 | grep 'elf64-big' >/dev/null 2>&1; \
then echo 'riscv64-unknown-linux-gnu-'; \
else echo "***" 1>&2; \
echo "*** Error: Couldn't find a riscv64 version of GCC/binutils." 1>&2; \
echo "*** To turn off this error, run 'gmake TOOLPREFIX= ...'." 1>&2; \
echo "***" 1>&2; exit 1; fi)
endif
CC = $(TOOLPREFIX)gcc
LD = $(TOOLPREFIX)ld
OBJCOPY = $(TOOLPREFIX)objcopy
# OBJDUMP = $(TOOLPREFIX)objdump
CFLAGS = -Wall -Werror -O -fno-omit-frame-pointer -ggdb -gdwarf-2
CFLAGS += -MD
CFLAGS += -mcmodel=medany
CFLAGS += -ffreestanding -fno-common -nostdlib -mno-relax
CFLAGS += -I.
CFLAGS += $(shell $(CC) -fno-stack-protector -E -x c /dev/null >/dev/null 2>&1 && echo -fno-stack-protector)
TARGET_TRIPLE = riscv64gc-unknown-none-elf
RUST_LIB = $R/target/$(TARGET_TRIPLE)/release/librustkernel.a
# Disable PIE when possible (for Ubuntu 16.10 toolchain)
ifneq ($(shell $(CC) -dumpspecs 2>/dev/null | grep -e '[^f]no-pie'),)
CFLAGS += -fno-pie -no-pie
endif
ifneq ($(shell $(CC) -dumpspecs 2>/dev/null | grep -e '[^f]nopie'),)
CFLAGS += -fno-pie -nopie
endif
LDFLAGS = -z max-page-size=4096
.PHONY: clean
kernel: $(OBJS) kernel.ld initcode $(RUST_LIB)
$(LD) $(LDFLAGS) -T kernel.ld -o kernel $(OBJS) $(RUST_LIB)
# $(OBJDUMP) -S kernel > kernel.asm
# $(OBJDUMP) -t kernel | sed '1,/SYMBOL TABLE/d; s/ .* / /; /^$$/d' > kernel.sym
$(RUST_LIB): $(shell find $R/src -type f) $R/Cargo.toml
cargo +nightly -Z unstable-options -C $R build --release
# $(OBJDUMP) -S $(RUST_LIB) > $(RUST_LIB).asm
initcode: initcode.S
$(CC) $(CFLAGS) -march=rv64g -nostdinc -I. -Ikernel -c initcode.S -o initcode.o
$(LD) $(LDFLAGS) -N -e start -Ttext 0 -o initcode.out initcode.o
$(OBJCOPY) -S -O binary initcode.out initcode
# $(OBJDUMP) -S initcode.o > initcode.asm
%.o: %.c *.h
$(CC) $(CFLAGS) -c $<
clean:
rm -f *.tex *.dvi *.idx *.aux *.log *.ind *.ilg \
*.o *.a *.d *.asm *.sym *.gch \
initcode initcode.out kernel
cargo +nightly -Z unstable-options -C $R clean


@@ -106,6 +106,8 @@ bread(uint dev, uint blockno)
void
bwrite(struct buf *b)
{
if(!holdingsleep(&b->lock))
panic("bwrite");
virtio_disk_rw(b, 1);
}
@@ -114,6 +116,9 @@ bwrite(struct buf *b)
void
brelse(struct buf *b)
{
if(!holdingsleep(&b->lock))
panic("brelse");
releasesleep(&b->lock);
acquire(&bcache.lock);


@@ -1,7 +1,3 @@
#include "types.h"
#include "param.h"
#include "sleeplock.h"
struct buf {
int valid; // has data been read from disk?
int disk; // does disk "own" buf?

kernel/console.c

@@ -0,0 +1,192 @@
//
// Console input and output, to the uart.
// Reads are line at a time.
// Implements special input characters:
// newline -- end of line
// control-h -- backspace
// control-u -- kill line
// control-d -- end of file
// control-p -- print process list
//
#include <stdarg.h>
#include "types.h"
#include "param.h"
#include "spinlock.h"
#include "sleeplock.h"
#include "fs.h"
#include "file.h"
#include "memlayout.h"
#include "riscv.h"
#include "defs.h"
#include "proc.h"
#define BACKSPACE 0x100
#define C(x) ((x)-'@') // Control-x
//
// send one character to the uart.
// called by printf(), and to echo input characters,
// but not from write().
//
void
consputc(int c)
{
if(c == BACKSPACE){
// if the user typed backspace, overwrite with a space.
uartputc_sync('\b'); uartputc_sync(' '); uartputc_sync('\b');
} else {
uartputc_sync(c);
}
}
struct {
struct spinlock lock;
// input
#define INPUT_BUF_SIZE 128
char buf[INPUT_BUF_SIZE];
uint r; // Read index
uint w; // Write index
uint e; // Edit index
} cons;
//
// user write()s to the console go here.
//
int
consolewrite(int user_src, uint64 src, int n)
{
int i;
for(i = 0; i < n; i++){
char c;
if(either_copyin(&c, user_src, src+i, 1) == -1)
break;
uartputc(c);
}
return i;
}
//
// user read()s from the console go here.
// copy (up to) a whole input line to dst.
// user_dist indicates whether dst is a user
// or kernel address.
//
int
consoleread(int user_dst, uint64 dst, int n)
{
uint target;
int c;
char cbuf;
target = n;
acquire(&cons.lock);
while(n > 0){
// wait until interrupt handler has put some
// input into cons.buffer.
while(cons.r == cons.w){
if(killed(myproc())){
release(&cons.lock);
return -1;
}
sleep(&cons.r, &cons.lock);
}
c = cons.buf[cons.r++ % INPUT_BUF_SIZE];
if(c == C('D')){ // end-of-file
if(n < target){
// Save ^D for next time, to make sure
// caller gets a 0-byte result.
cons.r--;
}
break;
}
// copy the input byte to the user-space buffer.
cbuf = c;
if(either_copyout(user_dst, dst, &cbuf, 1) == -1)
break;
dst++;
--n;
if(c == '\n'){
// a whole line has arrived, return to
// the user-level read().
break;
}
}
release(&cons.lock);
return target - n;
}
//
// the console input interrupt handler.
// uartintr() calls this for input character.
// do erase/kill processing, append to cons.buf,
// wake up consoleread() if a whole line has arrived.
//
void
consoleintr(int c)
{
acquire(&cons.lock);
switch(c){
case C('P'): // Print process list.
procdump();
break;
case C('U'): // Kill line.
while(cons.e != cons.w &&
cons.buf[(cons.e-1) % INPUT_BUF_SIZE] != '\n'){
cons.e--;
consputc(BACKSPACE);
}
break;
case C('H'): // Backspace
case '\x7f': // Delete key
if(cons.e != cons.w){
cons.e--;
consputc(BACKSPACE);
}
break;
default:
if(c != 0 && cons.e-cons.r < INPUT_BUF_SIZE){
c = (c == '\r') ? '\n' : c;
// echo back to the user.
consputc(c);
// store for consumption by consoleread().
cons.buf[cons.e++ % INPUT_BUF_SIZE] = c;
if(c == '\n' || c == C('D') || cons.e-cons.r == INPUT_BUF_SIZE){
// wake up consoleread() if a whole line (or end-of-file)
// has arrived.
cons.w = cons.e;
wakeup(&cons.r);
}
}
break;
}
release(&cons.lock);
}
void
consoleinit(void)
{
initlock(&cons.lock, "cons");
uartinit();
// connect read and write system calls
// to consoleread and consolewrite.
devsw[CONSOLE].read = consoleread;
devsw[CONSOLE].write = consolewrite;
}
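The console code above keeps three monotonically increasing indices (`r`, `w`, `e`) into a fixed-size buffer and reduces them with `% INPUT_BUF_SIZE` only at the point of access. That indexing scheme can be sketched in isolation; the names below are hypothetical and this is not kernel code:

```c
#include <assert.h>
#include <stdint.h>

#define RING_SIZE 8

/* Same scheme as cons: indices only ever grow, and a slot is index % RING_SIZE.
 * Because the indices are never reduced, "full" (w a whole lap ahead of r)
 * and "empty" (w == r) stay distinguishable without a separate count. */
struct ring {
    char buf[RING_SIZE];
    uint32_t r;   /* next slot to read  */
    uint32_t w;   /* next slot to write */
};

/* Returns 0 on success, -1 if the ring is full. */
int ring_put(struct ring *rb, char c) {
    if (rb->w - rb->r == RING_SIZE)
        return -1;                      /* writer is a full lap ahead */
    rb->buf[rb->w++ % RING_SIZE] = c;
    return 0;
}

/* Returns the next byte, or -1 if the ring is empty. */
int ring_get(struct ring *rb) {
    if (rb->r == rb->w)
        return -1;                      /* nothing buffered */
    return rb->buf[rb->r++ % RING_SIZE];
}
```

Unsigned wraparound makes `rb->w - rb->r` correct even after the 32-bit indices overflow, which is why the kernel can get away with never resetting `cons.r`, `cons.w`, or `cons.e`.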


@@ -1,138 +1,189 @@
#pragma once
#include "types.h"
#include "riscv.h"
#define PIPESIZE 512
struct buf;
struct context;
struct file;
struct inode;
struct spinlock;
struct pipe;
struct proc;
struct spinlock;
struct sleeplock;
struct stat;
struct superblock;
// bio.c
void binit(void);
struct buf *bread(uint, uint);
void brelse(struct buf *);
void bwrite(struct buf *);
void bpin(struct buf *);
void bunpin(struct buf *);
void binit(void);
struct buf* bread(uint, uint);
void brelse(struct buf*);
void bwrite(struct buf*);
void bpin(struct buf*);
void bunpin(struct buf*);
// console.c
void consoleinit(void);
void consoleintr(int);
void consputc(int);
// exec.c
int exec(char *, char **);
int exec(char*, char**);
// file.c
struct file *filealloc(void);
void fileclose(struct file *);
struct file *filedup(struct file *);
void fileinit(void);
int fileread(struct file *, uint64, int n);
int filestat(struct file *, uint64 addr);
int filewrite(struct file *, uint64, int n);
struct file* filealloc(void);
void fileclose(struct file*);
struct file* filedup(struct file*);
void fileinit(void);
int fileread(struct file*, uint64, int n);
int filestat(struct file*, uint64 addr);
int filewrite(struct file*, uint64, int n);
// fs.c
void fsinit(int);
int dirlink(struct inode *, char *, uint);
struct inode *dirlookup(struct inode *, char *, uint *);
struct inode *ialloc(uint, short);
struct inode *idup(struct inode *);
void iinit();
void ilock(struct inode *);
void iput(struct inode *);
void iunlock(struct inode *);
void iunlockput(struct inode *);
void iupdate(struct inode *);
int namecmp(const char *, const char *);
struct inode *namei(char *);
struct inode *nameiparent(char *, char *);
int readi(struct inode *, int, uint64, uint, uint);
void stati(struct inode *, struct stat *);
int writei(struct inode *, int, uint64, uint, uint);
void itrunc(struct inode *);
void fsinit(int);
int dirlink(struct inode*, char*, uint);
struct inode* dirlookup(struct inode*, char*, uint*);
struct inode* ialloc(uint, short);
struct inode* idup(struct inode*);
void iinit();
void ilock(struct inode*);
void iput(struct inode*);
void iunlock(struct inode*);
void iunlockput(struct inode*);
void iupdate(struct inode*);
int namecmp(const char*, const char*);
struct inode* namei(char*);
struct inode* nameiparent(char*, char*);
int readi(struct inode*, int, uint64, uint, uint);
void stati(struct inode*, struct stat*);
int writei(struct inode*, int, uint64, uint, uint);
void itrunc(struct inode*);
// ramdisk.c
void ramdiskinit(void);
void ramdiskintr(void);
void ramdiskrw(struct buf *);
void ramdiskinit(void);
void ramdiskintr(void);
void ramdiskrw(struct buf*);
// kalloc.c
void *kalloc(void);
void kfree(void *);
void* kalloc(void);
void kfree(void *);
void kinit(void);
// log.c
void initlog(int, struct superblock *);
void log_write(struct buf *);
void begin_op(void);
void end_op(void);
void initlog(int, struct superblock*);
void log_write(struct buf*);
void begin_op(void);
void end_op(void);
// pipe.c
int pipealloc(struct file **, struct file **);
int pipealloc(struct file**, struct file**);
void pipeclose(struct pipe*, int);
int piperead(struct pipe*, uint64, int);
int pipewrite(struct pipe*, uint64, int);
// printf.c
__attribute__((noreturn)) void panic(char *s);
void printstr(char *s);
void printint(int n);
void printf(char*, ...);
void panic(char*) __attribute__((noreturn));
void printfinit(void);
// proc.c
void proc_mapstacks(pagetable_t);
pagetable_t proc_pagetable(struct proc *);
void proc_freepagetable(pagetable_t, uint64);
struct cpu *mycpu(void);
struct proc *myproc();
void procinit(void);
void sleep_lock(void *, struct spinlock *);
void userinit(void);
void wakeup(void *chan);
int either_copyout(int user_dst, uint64 dst, void *src, uint64 len);
int either_copyin(void *dst, int user_src, uint64 src, uint64 len);
int cpuid(void);
void exit(int);
int fork(void);
int growproc(int);
void proc_mapstacks(pagetable_t);
pagetable_t proc_pagetable(struct proc *);
void proc_freepagetable(pagetable_t, uint64);
int kill(int);
int killed(struct proc*);
void setkilled(struct proc*);
struct cpu* mycpu(void);
struct cpu* getmycpu(void);
struct proc* myproc();
void procinit(void);
void scheduler(void) __attribute__((noreturn));
void sched(void);
void sleep(void*, struct spinlock*);
void userinit(void);
int wait(uint64);
void wakeup(void*);
void yield(void);
int either_copyout(int user_dst, uint64 dst, void *src, uint64 len);
int either_copyin(void *dst, int user_src, uint64 src, uint64 len);
void procdump(void);
// swtch.S
void swtch(struct context *, struct context *);
void swtch(struct context*, struct context*);
// spinlock.rs
void acquire(struct spinlock *);
void initlock(struct spinlock *, char *);
void release(struct spinlock *);
// spinlock.c
void acquire(struct spinlock*);
int holding(struct spinlock*);
void initlock(struct spinlock*, char*);
void release(struct spinlock*);
void push_off(void);
void pop_off(void);
// sleeplock.c
void acquiresleep(struct sleeplock *);
void releasesleep(struct sleeplock *);
void initsleeplock(struct sleeplock *, char *);
void acquiresleep(struct sleeplock*);
void releasesleep(struct sleeplock*);
int holdingsleep(struct sleeplock*);
void initsleeplock(struct sleeplock*, char*);
// string.c
void *memmove(void *, const void *, uint);
void *memset(void *, uint8, uint64);
char *safestrcpy(char *, const char *, int);
int strlen(const char *);
int strncmp(const char *, const char *, uint);
char *strncpy(char *, const char *, int);
int memcmp(const void*, const void*, uint);
void* memmove(void*, const void*, uint);
void* memset(void*, int, uint);
char* safestrcpy(char*, const char*, int);
int strlen(const char*);
int strncmp(const char*, const char*, uint);
char* strncpy(char*, const char*, int);
// syscall.c
void argint(int, int *);
int argstr(int, char *, int);
void argaddr(int, uint64 *);
int fetchstr(uint64, char *, int);
int fetchaddr(uint64, uint64 *);
void argint(int, int*);
int argstr(int, char*, int);
void argaddr(int, uint64 *);
int fetchstr(uint64, char*, int);
int fetchaddr(uint64, uint64*);
void syscall();
// trap.c
void usertrapret(void);
extern uint ticks;
void trapinit(void);
void trapinithart(void);
extern struct spinlock tickslock;
void usertrapret(void);
// uart.c
void uartinit(void);
void uartintr(void);
void uartputc(int);
void uartputc_sync(int);
int uartgetc(void);
// vm.c
uint64 uvmalloc(pagetable_t, uint64, uint64, int);
void uvmclear(pagetable_t, uint64);
uint64 walkaddr(pagetable_t, uint64);
int copyout(pagetable_t, uint64, char *, uint64);
int copyin(pagetable_t, char *, uint64, uint64);
void kvminit(void);
void kvminithart(void);
void kvmmap(pagetable_t, uint64, uint64, uint64, int);
int mappages(pagetable_t, uint64, uint64, uint64, int);
pagetable_t uvmcreate(void);
void uvmfirst(pagetable_t, uchar *, uint);
uint64 uvmalloc(pagetable_t, uint64, uint64, int);
uint64 uvmdealloc(pagetable_t, uint64, uint64);
int uvmcopy(pagetable_t, pagetable_t, uint64);
void uvmfree(pagetable_t, uint64);
void uvmunmap(pagetable_t, uint64, uint64, int);
void uvmclear(pagetable_t, uint64);
pte_t * walk(pagetable_t, uint64, int);
uint64 walkaddr(pagetable_t, uint64);
int copyout(pagetable_t, uint64, char *, uint64);
int copyin(pagetable_t, char *, uint64, uint64);
int copyinstr(pagetable_t, char *, uint64, uint64);
// plic.c
void plicinit(void);
void plicinithart(void);
int plic_claim(void);
void plic_complete(int);
// virtio_disk.c
void virtio_disk_init(void);
void virtio_disk_rw(struct buf *, int);
void virtio_disk_intr(void);
// number of elements in fixed-size array
#define NELEM(x) (sizeof(x) / sizeof((x)[0]))
#define NELEM(x) (sizeof(x)/sizeof((x)[0]))


@@ -1,5 +1,3 @@
#include "types.h"
// Format of an ELF executable file
#define ELF_MAGIC 0x464C457FU // "\x7FELF" in little endian


@@ -22,7 +22,7 @@ int flags2perm(int flags)
int
exec(char *path, char **argv)
{
char *s;
char *s, *last;
int i, off;
uint64 argc, sz = 0, sp, ustack[MAXARG], stackbase;
struct elfhdr elf;
@@ -115,9 +115,11 @@ exec(char *path, char **argv)
p->trapframe->a1 = sp;
// Save program name for debugging.
for (s = path; *s; s++)
;
for(last=s=path; *s; s++)
if(*s == '/')
last = s+1;
safestrcpy(p->name, last, sizeof(p->name));
// Commit to the user image.
oldpagetable = p->pagetable;
p->pagetable = pagetable;

kernel/file.c Normal file (182 lines)

@@ -0,0 +1,182 @@
//
// Support functions for system calls that involve file descriptors.
//
#include "types.h"
#include "riscv.h"
#include "defs.h"
#include "param.h"
#include "fs.h"
#include "spinlock.h"
#include "sleeplock.h"
#include "file.h"
#include "stat.h"
#include "proc.h"
struct devsw devsw[NDEV];
struct {
struct spinlock lock;
struct file file[NFILE];
} ftable;
void
fileinit(void)
{
initlock(&ftable.lock, "ftable");
}
// Allocate a file structure.
struct file*
filealloc(void)
{
struct file *f;
acquire(&ftable.lock);
for(f = ftable.file; f < ftable.file + NFILE; f++){
if(f->ref == 0){
f->ref = 1;
release(&ftable.lock);
return f;
}
}
release(&ftable.lock);
return 0;
}
// Increment ref count for file f.
struct file*
filedup(struct file *f)
{
acquire(&ftable.lock);
if(f->ref < 1)
panic("filedup");
f->ref++;
release(&ftable.lock);
return f;
}
// Close file f. (Decrement ref count, close when reaches 0.)
void
fileclose(struct file *f)
{
struct file ff;
acquire(&ftable.lock);
if(f->ref < 1)
panic("fileclose");
if(--f->ref > 0){
release(&ftable.lock);
return;
}
ff = *f;
f->ref = 0;
f->type = FD_NONE;
release(&ftable.lock);
if(ff.type == FD_PIPE){
pipeclose(ff.pipe, ff.writable);
} else if(ff.type == FD_INODE || ff.type == FD_DEVICE){
begin_op();
iput(ff.ip);
end_op();
}
}
// Get metadata about file f.
// addr is a user virtual address, pointing to a struct stat.
int
filestat(struct file *f, uint64 addr)
{
struct proc *p = myproc();
struct stat st;
if(f->type == FD_INODE || f->type == FD_DEVICE){
ilock(f->ip);
stati(f->ip, &st);
iunlock(f->ip);
if(copyout(p->pagetable, addr, (char *)&st, sizeof(st)) < 0)
return -1;
return 0;
}
return -1;
}
// Read from file f.
// addr is a user virtual address.
int
fileread(struct file *f, uint64 addr, int n)
{
int r = 0;
if(f->readable == 0)
return -1;
if(f->type == FD_PIPE){
r = piperead(f->pipe, addr, n);
} else if(f->type == FD_DEVICE){
if(f->major < 0 || f->major >= NDEV || !devsw[f->major].read)
return -1;
r = devsw[f->major].read(1, addr, n);
} else if(f->type == FD_INODE){
ilock(f->ip);
if((r = readi(f->ip, 1, addr, f->off, n)) > 0)
f->off += r;
iunlock(f->ip);
} else {
panic("fileread");
}
return r;
}
// Write to file f.
// addr is a user virtual address.
int
filewrite(struct file *f, uint64 addr, int n)
{
int r, ret = 0;
if(f->writable == 0)
return -1;
if(f->type == FD_PIPE){
ret = pipewrite(f->pipe, addr, n);
} else if(f->type == FD_DEVICE){
if(f->major < 0 || f->major >= NDEV || !devsw[f->major].write)
return -1;
ret = devsw[f->major].write(1, addr, n);
} else if(f->type == FD_INODE){
// write a few blocks at a time to avoid exceeding
// the maximum log transaction size, including
// i-node, indirect block, allocation blocks,
// and 2 blocks of slop for non-aligned writes.
// this really belongs lower down, since writei()
// might be writing a device like the console.
int max = ((MAXOPBLOCKS-1-1-2) / 2) * BSIZE;
int i = 0;
while(i < n){
int n1 = n - i;
if(n1 > max)
n1 = max;
begin_op();
ilock(f->ip);
if ((r = writei(f->ip, 1, addr + i, f->off, n1)) > 0)
f->off += r;
iunlock(f->ip);
end_op();
if(r != n1){
// error from writei
break;
}
i += r;
}
ret = (i == n ? n : -1);
} else {
panic("filewrite");
}
return ret;
}


@@ -1,6 +1,3 @@
#include "types.h"
#include "param.h"
struct file {
enum { FD_NONE, FD_PIPE, FD_INODE, FD_DEVICE } type;
int ref; // reference count


@@ -83,7 +83,7 @@ balloc(uint dev)
}
brelse(bp);
}
printstr("balloc: out of blocks\n");
printf("balloc: out of blocks\n");
return 0;
}
@@ -214,7 +214,7 @@ ialloc(uint dev, short type)
}
brelse(bp);
}
printstr("ialloc: no inodes\n");
printf("ialloc: no inodes\n");
return 0;
}
@@ -317,11 +317,10 @@ ilock(struct inode *ip)
}
// Unlock the given inode.
// Caller should hold ip->lock
void
iunlock(struct inode *ip)
{
if (ip == 0 || ip->ref < 1)
if(ip == 0 || !holdingsleep(&ip->lock) || ip->ref < 1)
panic("iunlock");
releasesleep(&ip->lock);


@@ -1,7 +1,6 @@
// On-disk file system format.
// Both the kernel and user programs use this header file.
#include "types.h"
#define ROOTINO 1 // root i-number
#define BSIZE 1024 // block size

kernel/kalloc.c Normal file (82 lines)

@@ -0,0 +1,82 @@
// Physical memory allocator, for user processes,
// kernel stacks, page-table pages,
// and pipe buffers. Allocates whole 4096-byte pages.
#include "types.h"
#include "param.h"
#include "memlayout.h"
#include "spinlock.h"
#include "riscv.h"
#include "defs.h"
void freerange(void *pa_start, void *pa_end);
extern char end[]; // first address after kernel.
// defined by kernel.ld.
struct run {
struct run *next;
};
struct {
struct spinlock lock;
struct run *freelist;
} kmem;
void
kinit()
{
initlock(&kmem.lock, "kmem");
freerange(end, (void*)PHYSTOP);
}
void
freerange(void *pa_start, void *pa_end)
{
char *p;
p = (char*)PGROUNDUP((uint64)pa_start);
for(; p + PGSIZE <= (char*)pa_end; p += PGSIZE)
kfree(p);
}
// Free the page of physical memory pointed at by pa,
// which normally should have been returned by a
// call to kalloc(). (The exception is when
// initializing the allocator; see kinit above.)
void
kfree(void *pa)
{
struct run *r;
if(((uint64)pa % PGSIZE) != 0 || (char*)pa < end || (uint64)pa >= PHYSTOP)
panic("kfree");
// Fill with junk to catch dangling refs.
memset(pa, 1, PGSIZE);
r = (struct run*)pa;
acquire(&kmem.lock);
r->next = kmem.freelist;
kmem.freelist = r;
release(&kmem.lock);
}
// Allocate one 4096-byte page of physical memory.
// Returns a pointer that the kernel can use.
// Returns 0 if the memory cannot be allocated.
void *
kalloc(void)
{
struct run *r;
acquire(&kmem.lock);
r = kmem.freelist;
if(r)
kmem.freelist = r->next;
release(&kmem.lock);
if(r)
memset((char*)r, 5, PGSIZE); // fill with junk
return (void*)r;
}


@@ -129,10 +129,10 @@ begin_op(void)
acquire(&log.lock);
while(1){
if(log.committing){
sleep_lock(&log, &log.lock);
sleep(&log, &log.lock);
} else if(log.lh.n + (log.outstanding+1)*MAXOPBLOCKS > LOGSIZE){
// this op might exhaust log space; wait for commit.
sleep_lock(&log, &log.lock);
sleep(&log, &log.lock);
} else {
log.outstanding += 1;
release(&log.lock);

kernel/main.c Normal file (45 lines)

@@ -0,0 +1,45 @@
#include "types.h"
#include "param.h"
#include "memlayout.h"
#include "riscv.h"
#include "defs.h"
volatile static int started = 0;
// start() jumps here in supervisor mode on all CPUs.
void
main()
{
if(cpuid() == 0){
consoleinit();
printfinit();
printf("\n");
printf("xv6 kernel is booting\n");
printf("\n");
kinit(); // physical page allocator
kvminit(); // create kernel page table
kvminithart(); // turn on paging
procinit(); // process table
trapinit(); // trap vectors
trapinithart(); // install kernel trap vector
plicinit(); // set up interrupt controller
plicinithart(); // ask PLIC for device interrupts
binit(); // buffer cache
iinit(); // inode table
fileinit(); // file table
virtio_disk_init(); // emulated hard disk
userinit(); // first user process
__sync_synchronize();
started = 1;
} else {
while(started == 0)
;
__sync_synchronize();
printf("hart %d starting\n", cpuid());
kvminithart(); // turn on paging
trapinithart(); // install kernel trap vector
plicinithart(); // ask PLIC for device interrupts
}
scheduler();
}


@@ -17,9 +17,6 @@
// end -- start of kernel page allocation area
// PHYSTOP -- end RAM used by the kernel
// QEMU test interface, used to power the machine off and on.
#define QEMU_POWER 0x100000
// qemu puts UART registers here in physical memory.
#define UART0 0x10000000L
#define UART0_IRQ 10

kernel/pipe.c Normal file (130 lines)

@@ -0,0 +1,130 @@
#include "types.h"
#include "riscv.h"
#include "defs.h"
#include "param.h"
#include "spinlock.h"
#include "proc.h"
#include "fs.h"
#include "sleeplock.h"
#include "file.h"
#define PIPESIZE 512
struct pipe {
struct spinlock lock;
char data[PIPESIZE];
uint nread; // number of bytes read
uint nwrite; // number of bytes written
int readopen; // read fd is still open
int writeopen; // write fd is still open
};
int
pipealloc(struct file **f0, struct file **f1)
{
struct pipe *pi;
pi = 0;
*f0 = *f1 = 0;
if((*f0 = filealloc()) == 0 || (*f1 = filealloc()) == 0)
goto bad;
if((pi = (struct pipe*)kalloc()) == 0)
goto bad;
pi->readopen = 1;
pi->writeopen = 1;
pi->nwrite = 0;
pi->nread = 0;
initlock(&pi->lock, "pipe");
(*f0)->type = FD_PIPE;
(*f0)->readable = 1;
(*f0)->writable = 0;
(*f0)->pipe = pi;
(*f1)->type = FD_PIPE;
(*f1)->readable = 0;
(*f1)->writable = 1;
(*f1)->pipe = pi;
return 0;
bad:
if(pi)
kfree((char*)pi);
if(*f0)
fileclose(*f0);
if(*f1)
fileclose(*f1);
return -1;
}
void
pipeclose(struct pipe *pi, int writable)
{
acquire(&pi->lock);
if(writable){
pi->writeopen = 0;
wakeup(&pi->nread);
} else {
pi->readopen = 0;
wakeup(&pi->nwrite);
}
if(pi->readopen == 0 && pi->writeopen == 0){
release(&pi->lock);
kfree((char*)pi);
} else
release(&pi->lock);
}
int
pipewrite(struct pipe *pi, uint64 addr, int n)
{
int i = 0;
struct proc *pr = myproc();
acquire(&pi->lock);
while(i < n){
if(pi->readopen == 0 || killed(pr)){
release(&pi->lock);
return -1;
}
if(pi->nwrite == pi->nread + PIPESIZE){ //DOC: pipewrite-full
wakeup(&pi->nread);
sleep(&pi->nwrite, &pi->lock);
} else {
char ch;
if(copyin(pr->pagetable, &ch, addr + i, 1) == -1)
break;
pi->data[pi->nwrite++ % PIPESIZE] = ch;
i++;
}
}
wakeup(&pi->nread);
release(&pi->lock);
return i;
}
int
piperead(struct pipe *pi, uint64 addr, int n)
{
int i;
struct proc *pr = myproc();
char ch;
acquire(&pi->lock);
while(pi->nread == pi->nwrite && pi->writeopen){ //DOC: pipe-empty
if(killed(pr)){
release(&pi->lock);
return -1;
}
sleep(&pi->nread, &pi->lock); //DOC: piperead-sleep
}
for(i = 0; i < n; i++){ //DOC: piperead-copy
if(pi->nread == pi->nwrite)
break;
ch = pi->data[pi->nread++ % PIPESIZE];
if(copyout(pr->pagetable, addr + i, &ch, 1) == -1)
break;
}
wakeup(&pi->nwrite); //DOC: piperead-wakeup
release(&pi->lock);
return i;
}

kernel/plic.c Normal file (47 lines)

@@ -0,0 +1,47 @@
#include "types.h"
#include "param.h"
#include "memlayout.h"
#include "riscv.h"
#include "defs.h"
//
// the riscv Platform Level Interrupt Controller (PLIC).
//
void
plicinit(void)
{
// set desired IRQ priorities non-zero (otherwise disabled).
*(uint32*)(PLIC + UART0_IRQ*4) = 1;
*(uint32*)(PLIC + VIRTIO0_IRQ*4) = 1;
}
void
plicinithart(void)
{
int hart = cpuid();
// set enable bits for this hart's S-mode
// for the uart and virtio disk.
*(uint32*)PLIC_SENABLE(hart) = (1 << UART0_IRQ) | (1 << VIRTIO0_IRQ);
// set this hart's S-mode priority threshold to 0.
*(uint32*)PLIC_SPRIORITY(hart) = 0;
}
// ask the PLIC what interrupt we should serve.
int
plic_claim(void)
{
int hart = cpuid();
int irq = *(uint32*)PLIC_SCLAIM(hart);
return irq;
}
// tell the PLIC we've served this IRQ.
void
plic_complete(int irq)
{
int hart = cpuid();
*(uint32*)PLIC_SCLAIM(hart) = irq;
}

kernel/printf.c Normal file (135 lines)

@@ -0,0 +1,135 @@
//
// formatted console output -- printf, panic.
//
#include <stdarg.h>
#include "types.h"
#include "param.h"
#include "spinlock.h"
#include "sleeplock.h"
#include "fs.h"
#include "file.h"
#include "memlayout.h"
#include "riscv.h"
#include "defs.h"
#include "proc.h"
volatile int panicked = 0;
// lock to avoid interleaving concurrent printf's.
static struct {
struct spinlock lock;
int locking;
} pr;
static char digits[] = "0123456789abcdef";
static void
printint(int xx, int base, int sign)
{
char buf[16];
int i;
uint x;
if(sign && (sign = xx < 0))
x = -xx;
else
x = xx;
i = 0;
do {
buf[i++] = digits[x % base];
} while((x /= base) != 0);
if(sign)
buf[i++] = '-';
while(--i >= 0)
consputc(buf[i]);
}
static void
printptr(uint64 x)
{
int i;
consputc('0');
consputc('x');
for (i = 0; i < (sizeof(uint64) * 2); i++, x <<= 4)
consputc(digits[x >> (sizeof(uint64) * 8 - 4)]);
}
// Print to the console. only understands %d, %x, %p, %s.
void
printf(char *fmt, ...)
{
va_list ap;
int i, c, locking;
char *s;
locking = pr.locking;
if(locking)
acquire(&pr.lock);
if (fmt == 0)
panic("null fmt");
va_start(ap, fmt);
for(i = 0; (c = fmt[i] & 0xff) != 0; i++){
if(c != '%'){
consputc(c);
continue;
}
c = fmt[++i] & 0xff;
if(c == 0)
break;
switch(c){
case 'd':
printint(va_arg(ap, int), 10, 1);
break;
case 'x':
printint(va_arg(ap, int), 16, 1);
break;
case 'p':
printptr(va_arg(ap, uint64));
break;
case 's':
if((s = va_arg(ap, char*)) == 0)
s = "(null)";
for(; *s; s++)
consputc(*s);
break;
case '%':
consputc('%');
break;
default:
// Print unknown % sequence to draw attention.
consputc('%');
consputc(c);
break;
}
}
va_end(ap);
if(locking)
release(&pr.lock);
}
void
panic(char *s)
{
pr.locking = 0;
printf("panic: ");
printf(s);
printf("\n");
panicked = 1; // freeze uart output from other CPUs
for(;;)
;
}
void
printfinit(void)
{
initlock(&pr.lock, "pr");
pr.locking = 1;
}

kernel/proc.c Normal file (683 lines)

@@ -0,0 +1,683 @@
#include "types.h"
#include "param.h"
#include "memlayout.h"
#include "riscv.h"
#include "spinlock.h"
#include "proc.h"
#include "defs.h"
struct cpu cpus[NCPU];
struct proc proc[NPROC];
struct proc *initproc;
int nextpid = 1;
struct spinlock pid_lock;
extern void forkret(void);
static void freeproc(struct proc *p);
extern char trampoline[]; // trampoline.S
// helps ensure that wakeups of wait()ing
// parents are not lost. helps obey the
// memory model when using p->parent.
// must be acquired before any p->lock.
struct spinlock wait_lock;
// Allocate a page for each process's kernel stack.
// Map it high in memory, followed by an invalid
// guard page.
void
proc_mapstacks(pagetable_t kpgtbl)
{
struct proc *p;
for(p = proc; p < &proc[NPROC]; p++) {
char *pa = kalloc();
if(pa == 0)
panic("kalloc");
uint64 va = KSTACK((int) (p - proc));
kvmmap(kpgtbl, va, (uint64)pa, PGSIZE, PTE_R | PTE_W);
}
}
// initialize the proc table.
void
procinit(void)
{
struct proc *p;
initlock(&pid_lock, "nextpid");
initlock(&wait_lock, "wait_lock");
for(p = proc; p < &proc[NPROC]; p++) {
initlock(&p->lock, "proc");
p->state = UNUSED;
p->kstack = KSTACK((int) (p - proc));
}
}
// Must be called with interrupts disabled,
// to prevent race with process being moved
// to a different CPU.
int
cpuid()
{
int id = r_tp();
return id;
}
// Return this CPU's cpu struct.
// Interrupts must be disabled.
struct cpu*
mycpu(void)
{
int id = cpuid();
struct cpu *c = &cpus[id];
return c;
}
// Return the current struct proc *, or zero if none.
struct proc*
myproc(void)
{
push_off();
struct cpu *c = mycpu();
struct proc *p = c->proc;
pop_off();
return p;
}
int
allocpid()
{
int pid;
acquire(&pid_lock);
pid = nextpid;
nextpid = nextpid + 1;
release(&pid_lock);
return pid;
}
// Look in the process table for an UNUSED proc.
// If found, initialize state required to run in the kernel,
// and return with p->lock held.
// If there are no free procs, or a memory allocation fails, return 0.
static struct proc*
allocproc(void)
{
struct proc *p;
for(p = proc; p < &proc[NPROC]; p++) {
acquire(&p->lock);
if(p->state == UNUSED) {
goto found;
} else {
release(&p->lock);
}
}
return 0;
found:
p->pid = allocpid();
p->state = USED;
// Allocate a trapframe page.
if((p->trapframe = (struct trapframe *)kalloc()) == 0){
freeproc(p);
release(&p->lock);
return 0;
}
// An empty user page table.
p->pagetable = proc_pagetable(p);
if(p->pagetable == 0){
freeproc(p);
release(&p->lock);
return 0;
}
// Set up new context to start executing at forkret,
// which returns to user space.
memset(&p->context, 0, sizeof(p->context));
p->context.ra = (uint64)forkret;
p->context.sp = p->kstack + PGSIZE;
return p;
}
// free a proc structure and the data hanging from it,
// including user pages.
// p->lock must be held.
static void
freeproc(struct proc *p)
{
if(p->trapframe)
kfree((void*)p->trapframe);
p->trapframe = 0;
if(p->pagetable)
proc_freepagetable(p->pagetable, p->sz);
p->pagetable = 0;
p->sz = 0;
p->pid = 0;
p->parent = 0;
p->name[0] = 0;
p->chan = 0;
p->killed = 0;
p->xstate = 0;
p->state = UNUSED;
}
// Create a user page table for a given process, with no user memory,
// but with trampoline and trapframe pages.
pagetable_t
proc_pagetable(struct proc *p)
{
pagetable_t pagetable;
// An empty page table.
pagetable = uvmcreate();
if(pagetable == 0)
return 0;
// map the trampoline code (for system call return)
// at the highest user virtual address.
// only the supervisor uses it, on the way
// to/from user space, so not PTE_U.
if(mappages(pagetable, TRAMPOLINE, PGSIZE,
(uint64)trampoline, PTE_R | PTE_X) < 0){
uvmfree(pagetable, 0);
return 0;
}
// map the trapframe page just below the trampoline page, for
// trampoline.S.
if(mappages(pagetable, TRAPFRAME, PGSIZE,
(uint64)(p->trapframe), PTE_R | PTE_W) < 0){
uvmunmap(pagetable, TRAMPOLINE, 1, 0);
uvmfree(pagetable, 0);
return 0;
}
return pagetable;
}
// Free a process's page table, and free the
// physical memory it refers to.
void
proc_freepagetable(pagetable_t pagetable, uint64 sz)
{
uvmunmap(pagetable, TRAMPOLINE, 1, 0);
uvmunmap(pagetable, TRAPFRAME, 1, 0);
uvmfree(pagetable, sz);
}
// a user program that calls exec("/init")
// assembled from ../user/initcode.S
// od -t xC ../user/initcode
uchar initcode[] = {
0x17, 0x05, 0x00, 0x00, 0x13, 0x05, 0x45, 0x02,
0x97, 0x05, 0x00, 0x00, 0x93, 0x85, 0x35, 0x02,
0x93, 0x08, 0x70, 0x00, 0x73, 0x00, 0x00, 0x00,
0x93, 0x08, 0x20, 0x00, 0x73, 0x00, 0x00, 0x00,
0xef, 0xf0, 0x9f, 0xff, 0x2f, 0x69, 0x6e, 0x69,
0x74, 0x00, 0x00, 0x24, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00
};
// Set up first user process.
void
userinit(void)
{
struct proc *p;
p = allocproc();
initproc = p;
// allocate one user page and copy initcode's instructions
// and data into it.
uvmfirst(p->pagetable, initcode, sizeof(initcode));
p->sz = PGSIZE;
// prepare for the very first "return" from kernel to user.
p->trapframe->epc = 0; // user program counter
p->trapframe->sp = PGSIZE; // user stack pointer
safestrcpy(p->name, "initcode", sizeof(p->name));
p->cwd = namei("/");
p->state = RUNNABLE;
release(&p->lock);
}
// Grow or shrink user memory by n bytes.
// Return 0 on success, -1 on failure.
int
growproc(int n)
{
uint64 sz;
struct proc *p = myproc();
sz = p->sz;
if(n > 0){
if((sz = uvmalloc(p->pagetable, sz, sz + n, PTE_W)) == 0) {
return -1;
}
} else if(n < 0){
sz = uvmdealloc(p->pagetable, sz, sz + n);
}
p->sz = sz;
return 0;
}
// Create a new process, copying the parent.
// Sets up child kernel stack to return as if from fork() system call.
int
fork(void)
{
int i, pid;
struct proc *np;
struct proc *p = myproc();
// Allocate process.
if((np = allocproc()) == 0){
return -1;
}
// Copy user memory from parent to child.
if(uvmcopy(p->pagetable, np->pagetable, p->sz) < 0){
freeproc(np);
release(&np->lock);
return -1;
}
np->sz = p->sz;
// copy saved user registers.
*(np->trapframe) = *(p->trapframe);
// Cause fork to return 0 in the child.
np->trapframe->a0 = 0;
// increment reference counts on open file descriptors.
for(i = 0; i < NOFILE; i++)
if(p->ofile[i])
np->ofile[i] = filedup(p->ofile[i]);
np->cwd = idup(p->cwd);
safestrcpy(np->name, p->name, sizeof(p->name));
pid = np->pid;
release(&np->lock);
acquire(&wait_lock);
np->parent = p;
release(&wait_lock);
acquire(&np->lock);
np->state = RUNNABLE;
release(&np->lock);
return pid;
}
// Pass p's abandoned children to init.
// Caller must hold wait_lock.
void
reparent(struct proc *p)
{
struct proc *pp;
for(pp = proc; pp < &proc[NPROC]; pp++){
if(pp->parent == p){
pp->parent = initproc;
wakeup(initproc);
}
}
}
// Exit the current process. Does not return.
// An exited process remains in the zombie state
// until its parent calls wait().
void
exit(int status)
{
struct proc *p = myproc();
if(p == initproc)
panic("init exiting");
// Close all open files.
for(int fd = 0; fd < NOFILE; fd++){
if(p->ofile[fd]){
struct file *f = p->ofile[fd];
fileclose(f);
p->ofile[fd] = 0;
}
}
begin_op();
iput(p->cwd);
end_op();
p->cwd = 0;
acquire(&wait_lock);
// Give any children to init.
reparent(p);
// Parent might be sleeping in wait().
wakeup(p->parent);
acquire(&p->lock);
p->xstate = status;
p->state = ZOMBIE;
release(&wait_lock);
// Jump into the scheduler, never to return.
sched();
panic("zombie exit");
}
// Wait for a child process to exit and return its pid.
// Return -1 if this process has no children.
int
wait(uint64 addr)
{
struct proc *pp;
int havekids, pid;
struct proc *p = myproc();
acquire(&wait_lock);
for(;;){
// Scan through table looking for exited children.
havekids = 0;
for(pp = proc; pp < &proc[NPROC]; pp++){
if(pp->parent == p){
// make sure the child isn't still in exit() or swtch().
acquire(&pp->lock);
havekids = 1;
if(pp->state == ZOMBIE){
// Found one.
pid = pp->pid;
if(addr != 0 && copyout(p->pagetable, addr, (char *)&pp->xstate,
sizeof(pp->xstate)) < 0) {
release(&pp->lock);
release(&wait_lock);
return -1;
}
freeproc(pp);
release(&pp->lock);
release(&wait_lock);
return pid;
}
release(&pp->lock);
}
}
// No point waiting if we don't have any children.
if(!havekids || killed(p)){
release(&wait_lock);
return -1;
}
// Wait for a child to exit.
sleep(p, &wait_lock); //DOC: wait-sleep
}
}
// Per-CPU process scheduler.
// Each CPU calls scheduler() after setting itself up.
// Scheduler never returns. It loops, doing:
// - choose a process to run.
// - swtch to start running that process.
// - eventually that process transfers control
// via swtch back to the scheduler.
void
scheduler(void)
{
struct proc *p;
struct cpu *c = mycpu();
c->proc = 0;
for(;;){
// Avoid deadlock by ensuring that devices can interrupt.
intr_on();
for(p = proc; p < &proc[NPROC]; p++) {
acquire(&p->lock);
if(p->state == RUNNABLE) {
// Switch to chosen process. It is the process's job
// to release its lock and then reacquire it
// before jumping back to us.
p->state = RUNNING;
c->proc = p;
swtch(&c->context, &p->context);
// Process is done running for now.
// It should have changed its p->state before coming back.
c->proc = 0;
}
release(&p->lock);
}
}
}
// Switch to scheduler. Must hold only p->lock
// and have changed proc->state. Saves and restores
// intena because intena is a property of this
// kernel thread, not this CPU. It should
// be proc->intena and proc->noff, but that would
// break in the few places where a lock is held but
// there's no process.
void
sched(void)
{
int intena;
struct proc *p = myproc();
if(!holding(&p->lock))
panic("sched p->lock");
if(mycpu()->noff != 1)
panic("sched locks");
if(p->state == RUNNING)
panic("sched running");
if(intr_get())
panic("sched interruptible");
intena = mycpu()->intena;
swtch(&p->context, &mycpu()->context);
mycpu()->intena = intena;
}
// Give up the CPU for one scheduling round.
void
yield(void)
{
struct proc *p = myproc();
acquire(&p->lock);
p->state = RUNNABLE;
sched();
release(&p->lock);
}
// A fork child's very first scheduling by scheduler()
// will swtch to forkret.
void
forkret(void)
{
static int first = 1;
// Still holding p->lock from scheduler.
release(&myproc()->lock);
if (first) {
// File system initialization must be run in the context of a
// regular process (e.g., because it calls sleep), and thus cannot
// be run from main().
first = 0;
fsinit(ROOTDEV);
}
usertrapret();
}
// Atomically release lock and sleep on chan.
// Reacquires lock when awakened.
void
sleep(void *chan, struct spinlock *lk)
{
struct proc *p = myproc();
// Must acquire p->lock in order to
// change p->state and then call sched.
// Once we hold p->lock, we can be
// guaranteed that we won't miss any wakeup
// (wakeup locks p->lock),
// so it's okay to release lk.
acquire(&p->lock); //DOC: sleeplock1
release(lk);
// Go to sleep.
p->chan = chan;
p->state = SLEEPING;
sched();
// Tidy up.
p->chan = 0;
// Reacquire original lock.
release(&p->lock);
acquire(lk);
}
// Wake up all processes sleeping on chan.
// Must be called without any p->lock.
void
wakeup(void *chan)
{
struct proc *p;
for(p = proc; p < &proc[NPROC]; p++) {
if(p != myproc()){
acquire(&p->lock);
if(p->state == SLEEPING && p->chan == chan) {
p->state = RUNNABLE;
}
release(&p->lock);
}
}
}
// Kill the process with the given pid.
// The victim won't exit until it tries to return
// to user space (see usertrap() in trap.c).
int
kill(int pid)
{
struct proc *p;
for(p = proc; p < &proc[NPROC]; p++){
acquire(&p->lock);
if(p->pid == pid){
p->killed = 1;
if(p->state == SLEEPING){
// Wake process from sleep().
p->state = RUNNABLE;
}
release(&p->lock);
return 0;
}
release(&p->lock);
}
return -1;
}
void
setkilled(struct proc *p)
{
acquire(&p->lock);
p->killed = 1;
release(&p->lock);
}
int
killed(struct proc *p)
{
int k;
acquire(&p->lock);
k = p->killed;
release(&p->lock);
return k;
}
// Copy to either a user address, or kernel address,
// depending on usr_dst.
// Returns 0 on success, -1 on error.
int
either_copyout(int user_dst, uint64 dst, void *src, uint64 len)
{
struct proc *p = myproc();
if(user_dst){
return copyout(p->pagetable, dst, src, len);
} else {
memmove((char *)dst, src, len);
return 0;
}
}
// Copy from either a user address, or kernel address,
// depending on usr_src.
// Returns 0 on success, -1 on error.
int
either_copyin(void *dst, int user_src, uint64 src, uint64 len)
{
struct proc *p = myproc();
if(user_src){
return copyin(p->pagetable, dst, src, len);
} else {
memmove(dst, (char*)src, len);
return 0;
}
}
// Print a process listing to console. For debugging.
// Runs when user types ^P on console.
// No lock to avoid wedging a stuck machine further.
void
procdump(void)
{
static char *states[] = {
[UNUSED] "unused",
[USED] "used",
[SLEEPING] "sleep ",
[RUNNABLE] "runble",
[RUNNING] "run ",
[ZOMBIE] "zombie"
};
struct proc *p;
char *state;
printf("\n");
for(p = proc; p < &proc[NPROC]; p++){
if(p->state == UNUSED)
continue;
if(p->state >= 0 && p->state < NELEM(states) && states[p->state])
state = states[p->state];
else
state = "???";
printf("%d %s %s", p->pid, state, p->name);
printf("\n");
}
}


@@ -1,8 +1,3 @@
#include "types.h"
#include "param.h"
#include "riscv.h"
#include "spinlock.h"
// Saved registers for kernel context switches.
struct context {
uint64 ra;
@@ -27,10 +22,12 @@ struct context {
struct cpu {
struct proc *proc; // The process running on this cpu, or null.
struct context context; // swtch() here to enter scheduler().
int interrupt_disable_layers; // Depth of push_off() nesting.
int previous_interrupts_enabled; // Were interrupts enabled before push_off()?
int noff; // Depth of push_off() nesting.
int intena; // Were interrupts enabled before push_off()?
};
extern struct cpu cpus[NCPU];
// per-process data for the trap handling code in trampoline.S.
// sits in a page by itself just under the trampoline page in the
// user page table. not specially mapped in the kernel page table.
@@ -106,4 +103,5 @@ struct proc {
struct context context; // swtch() here to run process
struct file *ofile[NOFILE]; // Open files
struct inode *cwd; // Current directory
char name[16]; // Process name (debugging)
};


@@ -12,14 +12,18 @@
#include "fs.h"
#include "buf.h"
void ramdiskinit(void);
void
ramdiskinit(void)
{
}
// If B_DIRTY is set, write buf to disk, clear B_DIRTY, set B_VALID.
// Else if B_VALID is not set, read buf from disk, set B_VALID.
// Caller should hold b->lock
void
ramdiskrw(struct buf *b)
{
if(!holdingsleep(&b->lock))
panic("ramdiskrw: buf not locked");
if((b->flags & (B_VALID|B_DIRTY)) == B_VALID)
panic("ramdiskrw: nothing to do");


@@ -1,7 +1,4 @@
#ifndef __ASSEMBLER__
#pragma once
#include "./types.h"
// which hart (core) is this?
static inline uint64


@@ -1,3 +0,0 @@
[build]
target = "riscv64gc-unknown-none-elf"
rustflags = ["-Csoft-float=n"]


@@ -1,16 +0,0 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 3
[[package]]
name = "arrayvec"
version = "0.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "96d30a06541fbafbc7f82ed10c06164cfbd2c401138f6addd8404629c4b16711"
[[package]]
name = "rustkernel"
version = "0.1.0"
dependencies = [
"arrayvec",
]


@@ -1,19 +0,0 @@
[package]
name = "rustkernel"
version = "0.1.0"
edition = "2021"
authors = ["Garen Tyler <garentyler@garen.dev>"]
repository = "https://github.com/garentyler/xv6-riscv"
readme = "../../README.md"
license = "LGPL-3.0-only"
[dependencies]
arrayvec = { version = "0.7.4", default-features = false }
[features]
default = ["qemu-riscv64"]
qemu-riscv64 = []
milk-v = []
[lib]
crate-type = ["staticlib"]


@@ -1,216 +0,0 @@
//! Console input and output, to the uart.
//
// Reads are a line at a time.
// Implements special input characters:
// - newline: end of line
// - ctrl-h: backspace
// - ctrl-u: kill line
// - ctrl-d: end of file
// - ctrl-p: print process list
pub mod printf;
use crate::{
fs::file::{devsw, CONSOLE},
hal::arch::virtual_memory::{either_copyin, either_copyout},
hal::hardware::uart::BufferedUart,
proc::{
process::{procdump, Process},
scheduler::wakeup,
},
sync::mutex::Mutex,
};
use core::ptr::addr_of_mut;
pub static UART0: &BufferedUart = &crate::hal::platform::UARTS[0].1;
pub const BACKSPACE: u8 = 0x00;
pub const INPUT_BUF_SIZE: usize = 128;
pub struct Console {
pub buffer: [u8; INPUT_BUF_SIZE],
pub read_index: usize,
pub write_index: usize,
pub edit_index: usize,
}
impl Console {
pub fn read_byte(&self) -> &u8 {
&self.buffer[self.read_index % self.buffer.len()]
}
pub fn write_byte(&mut self) -> &mut u8 {
let i = self.write_index % self.buffer.len();
&mut self.buffer[i]
}
pub fn edit_byte(&mut self) -> &mut u8 {
let i = self.edit_index % self.buffer.len();
&mut self.buffer[i]
}
}
impl core::fmt::Write for Console {
fn write_str(&mut self, s: &str) -> core::fmt::Result {
UART0.writer().write_str(s)
}
}
#[no_mangle]
pub static cons: Mutex<Console> = Mutex::new(Console {
buffer: [0u8; INPUT_BUF_SIZE],
read_index: 0,
write_index: 0,
edit_index: 0,
});
/// ctrl-x
const fn ctrl_x(x: u8) -> u8 {
x - b'@'
}
/// Send one character to the UART.
///
/// Called by printf(), and to echo input
/// characters but not from write().
pub fn consputc(c: u8) {
if c == BACKSPACE {
// If the user typed backspace, overwrite with a space.
UART0.write_slice_buffered(b"\x08 \x08");
} else {
UART0.write_byte_buffered(c);
}
}
/// User write()s to the console go here.
pub fn consolewrite(user_src: i32, src: u64, n: i32) -> i32 {
unsafe {
for i in 0..n {
let mut c = 0i8;
if either_copyin(
addr_of_mut!(c).cast(),
user_src,
src as usize + i as u32 as usize,
1,
) == -1
{
return i;
} else {
UART0.write_byte_buffered(c as u8);
}
}
        n // all n bytes were copied; returning 0 would make user write() spin
}
}
/// User read()s from the console go here.
///
/// Copy (up to) a whole input line to dst.
/// user_dst indicates whether dst is a user
/// or kernel address.
pub fn consoleread(user_dst: i32, mut dst: u64, mut n: i32) -> i32 {
unsafe {
let target = n;
let mut c;
let mut cbuf;
let mut console = cons.lock_spinning();
while n > 0 {
// Wait until interrupt handler has put
// some input into cons.buffer.
while console.read_index == console.write_index {
if Process::current().unwrap().is_killed() {
// cons.lock.unlock();
return -1;
}
let channel = addr_of_mut!(console.read_index).cast();
console.sleep(channel);
}
c = *console.read_byte();
console.read_index += 1;
// ctrl-D or EOF
if c == ctrl_x(b'D') {
if n < target {
// Save ctrl-D for next time, to make
// sure caller gets a 0-byte result.
console.read_index -= 1;
}
break;
}
// Copy the input byte to the user-space buffer.
cbuf = c;
if either_copyout(user_dst, dst as usize, addr_of_mut!(cbuf).cast(), 1) == -1 {
break;
}
dst += 1;
n -= 1;
if c == b'\n' {
// A whole line has arrived,
// return to the user-level read().
break;
}
}
// cons.lock.unlock();
target - n
}
}
pub unsafe fn consoleinit() {
UART0.initialize();
// Connect read and write syscalls
// to consoleread and consolewrite.
devsw[CONSOLE].read = Some(consoleread);
devsw[CONSOLE].write = Some(consolewrite);
}
/// The console input interrupt handler.
///
/// uartintr() calls this for each input character.
/// Do erase/kill processing, then append to cons.buf.
/// Wake up consoleread() if a whole line has arrived.
pub fn consoleintr(mut c: u8) {
let mut console = cons.lock_spinning();
if c == ctrl_x(b'P') {
// Print process list.
unsafe { procdump() };
} else if c == ctrl_x(b'U') {
// Kill line.
while console.edit_index != console.write_index
&& console.buffer[(console.edit_index - 1) % INPUT_BUF_SIZE] != b'\n'
{
console.edit_index -= 1;
consputc(BACKSPACE);
}
} else if c == ctrl_x(b'H') || c == 0x7f {
// Backspace or delete key.
if console.edit_index != console.write_index {
console.edit_index -= 1;
consputc(BACKSPACE);
}
} else if c != 0 && console.edit_index - console.read_index < INPUT_BUF_SIZE {
c = if c == b'\r' { b'\n' } else { c };
// Echo back to the user.
consputc(c);
// Store for consumption by consoleread().
*console.edit_byte() = c;
console.edit_index += 1;
if c == b'\n'
|| c == ctrl_x(b'D')
|| console.edit_index - console.read_index == INPUT_BUF_SIZE
{
// Wake up consoleread() if a whole line (or EOF) has arrived.
console.write_index = console.edit_index;
unsafe { wakeup(addr_of_mut!(console.read_index).cast()) };
}
}
}

@ -1,67 +0,0 @@
use crate::sync::lock::Lock;
use core::ffi::{c_char, CStr};
pub static PRINT_LOCK: Lock = Lock::new();
/// Print out formatted text to the console.
/// Spins to acquire the lock.
macro_rules! print {
($($arg:tt)*) => {{
use core::fmt::Write;
let _guard = $crate::console::printf::PRINT_LOCK.lock_spinning();
let mut cons = $crate::console::cons.lock_spinning();
let _ = core::write!(cons.as_mut(), $($arg)*);
}};
}
pub(crate) use print;
macro_rules! println {
($($arg:tt)*) => {{
use $crate::console::printf::print;
print!($($arg)*);
print!("\n");
}};
}
pub(crate) use println;
/// Print out formatted text to the UART.
/// Does not use any locks.
macro_rules! uprint {
    ($($arg:tt)*) => {{
        use core::fmt::Write;
        let _ = core::write!($crate::console::UART0.writer_unbuffered(), $($arg)*);
    }};
}
pub(crate) use uprint;
macro_rules! uprintln {
($($arg:tt)*) => {{
use $crate::console::printf::uprint;
uprint!($($arg)*);
uprint!("\n");
}};
}
pub(crate) use uprintln;
#[no_mangle]
pub extern "C" fn printint(n: i32) {
print!("{}", n);
}
#[no_mangle]
pub unsafe extern "C" fn printstr(s: *const c_char) {
let s = CStr::from_ptr(s).to_str().unwrap_or_default();
print!("{}", s);
}

@ -1,263 +0,0 @@
//! Support functions for system calls that involve file descriptors.
use super::inode::{iput, readi, stati, writei, Inode, InodeLockGuard};
use crate::{
fs::{log, stat::Stat},
hal::arch::virtual_memory::copyout,
io::pipe::Pipe,
proc::process::Process,
sync::mutex::Mutex,
};
use core::ptr::{addr_of_mut, null_mut};
#[repr(C)]
#[derive(Copy, Clone, PartialEq, Default)]
pub enum FileType {
#[default]
None,
Pipe,
Inode,
Device,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct File {
pub kind: FileType,
/// Reference count.
pub references: i32,
pub readable: u8,
pub writable: u8,
/// FileType::Pipe
pub pipe: *mut Pipe,
/// FileType::Inode and FileType::Device
pub ip: *mut Inode,
/// FileType::Inode
pub off: u32,
/// FileType::Device
pub major: i16,
}
unsafe impl Send for File {}
impl File {
pub const fn uninitialized() -> File {
File {
kind: FileType::None,
references: 0,
readable: 0,
writable: 0,
pipe: null_mut(),
ip: null_mut(),
off: 0,
major: 0,
}
}
}
/// Map major device number to device functions.
#[repr(C)]
#[derive(Copy, Clone, Default)]
pub struct Devsw {
pub read: Option<fn(i32, u64, i32) -> i32>,
pub write: Option<fn(i32, u64, i32) -> i32>,
}
impl Devsw {
pub const fn new() -> Devsw {
Devsw {
read: None,
write: None,
}
}
}
#[no_mangle]
pub static mut devsw: [Devsw; crate::NDEV] = [Devsw::new(); crate::NDEV];
pub static FILES: Mutex<[File; crate::NFILE]> = Mutex::new([File::uninitialized(); crate::NFILE]);
pub const CONSOLE: usize = 1;
/// Allocate a file structure.
#[no_mangle]
pub unsafe extern "C" fn filealloc() -> *mut File {
let mut files = FILES.lock_spinning();
for file in files.as_mut() {
if file.references == 0 {
file.references = 1;
return addr_of_mut!(*file);
}
}
null_mut()
}
/// Increment reference count for file `file`.
pub unsafe fn filedup(file: *mut File) -> *mut File {
let _guard = FILES.lock_spinning();
if (*file).references < 1 {
panic!("filedup");
} else {
(*file).references += 1;
}
file
}
/// Close file `file`.
///
/// Decrement reference count, and close when reaching 0.
#[no_mangle]
pub unsafe extern "C" fn fileclose(file: *mut File) {
let guard = FILES.lock_spinning();
if (*file).references < 1 {
panic!("fileclose");
}
(*file).references -= 1;
if (*file).references == 0 {
let f = *file;
(*file).references = 0;
(*file).kind = FileType::None;
core::mem::drop(guard);
match f.kind {
FileType::Pipe => (*f.pipe).close(f.writable as i32),
FileType::Inode | FileType::Device => {
let _operation = log::LogOperation::new();
iput(f.ip);
}
FileType::None => {}
}
}
}
/// Get metadata about file `file`.
///
/// `addr` is a user virtual address, pointing to a Stat.
pub unsafe fn filestat(file: *mut File, addr: u64) -> i32 {
let proc = Process::current().unwrap();
let mut stat = Stat::default();
if (*file).kind == FileType::Inode || (*file).kind == FileType::Device {
{
let _guard = InodeLockGuard::new((*file).ip.as_mut().unwrap());
stati((*file).ip, addr_of_mut!(stat));
}
if copyout(
proc.pagetable,
addr as usize,
addr_of_mut!(stat).cast(),
core::mem::size_of::<Stat>(),
) < 0
{
return -1;
} else {
return 0;
}
}
-1
}
/// Read from file `file`.
///
/// `addr` is a user virtual address.
pub unsafe fn fileread(file: *mut File, addr: u64, num_bytes: i32) -> i32 {
if (*file).readable == 0 {
return -1;
}
match (*file).kind {
FileType::Pipe => (*(*file).pipe)
.read(addr, num_bytes as usize)
.map(|n| n as i32)
.unwrap_or(-1i32),
FileType::Device => {
if (*file).major < 0 || (*file).major >= crate::NDEV as i16 {
return -1;
}
let Some(read) = devsw[(*file).major as usize].read else {
return -1;
};
read(1, addr, num_bytes)
}
FileType::Inode => {
let _guard = InodeLockGuard::new((*file).ip.as_mut().unwrap());
let r = readi((*file).ip, 1, addr, (*file).off, num_bytes as u32);
if r > 0 {
(*file).off += r as u32;
}
r
}
_ => panic!("fileread"),
}
}
/// Write to file `file`.
///
/// `addr` is a user virtual address.
pub unsafe fn filewrite(file: *mut File, addr: u64, num_bytes: i32) -> i32 {
if (*file).writable == 0 {
return -1;
}
match (*file).kind {
FileType::Pipe => (*(*file).pipe)
.write(addr, num_bytes as usize)
.map(|n| n as i32)
.unwrap_or(-1i32),
FileType::Device => {
if (*file).major < 0 || (*file).major >= crate::NDEV as i16 {
return -1;
}
let Some(write) = devsw[(*file).major as usize].write else {
return -1;
};
write(1, addr, num_bytes)
}
FileType::Inode => {
// Write a few blocks at a time to avoid exceeding
// the maximum log transaction size, including
// inode, indirect block, allocation blocks,
// and 2 blocks of slop for non-aligned writes.
// This really belongs lower down, since writei()
// might be writing a device like the console.
let max = ((crate::MAXOPBLOCKS - 1 - 1 - 2) / 2) * super::BSIZE as usize;
let mut i = 0;
while i < num_bytes {
let mut n = num_bytes - i;
if n > max as i32 {
n = max as i32;
}
let r = {
let _operation = log::LogOperation::new();
let _guard = InodeLockGuard::new((*file).ip.as_mut().unwrap());
let r = writei((*file).ip, 1, addr + i as u64, (*file).off, n as u32);
if r > 0 {
(*file).off += r as u32;
}
r
};
if r != n {
// Error from writei.
break;
} else {
i += r;
}
}
if i == num_bytes {
num_bytes
} else {
-1
}
}
_ => panic!("filewrite"),
}
}

@ -1,66 +0,0 @@
use super::stat::Stat;
use crate::sync::sleeplock::Sleeplock;
extern "C" {
pub fn iinit();
pub fn ialloc(dev: u32, kind: i16) -> *mut Inode;
pub fn iupdate(ip: *mut Inode);
pub fn idup(ip: *mut Inode) -> *mut Inode;
pub fn ilock(ip: *mut Inode);
pub fn iunlock(ip: *mut Inode);
pub fn iput(ip: *mut Inode);
pub fn iunlockput(ip: *mut Inode);
pub fn itrunc(ip: *mut Inode);
pub fn stati(ip: *mut Inode, st: *mut Stat);
pub fn readi(ip: *mut Inode, user_dst: i32, dst: u64, off: u32, n: u32) -> i32;
pub fn writei(ip: *mut Inode, user_src: i32, src: u64, off: u32, n: u32) -> i32;
pub fn namei(path: *mut u8) -> *mut Inode;
// pub fn namecmp()
}
#[repr(C)]
#[derive(Clone)]
pub struct Inode {
/// Device number.
pub device: u32,
/// Inode number.
pub inum: u32,
/// Reference count.
pub references: i32,
pub lock: Sleeplock,
/// Inode has been read from disk?
pub valid: i32,
// Copy of DiskInode
pub kind: i16,
pub major: i16,
pub minor: i16,
pub num_links: i16,
pub size: u32,
pub addresses: [u32; crate::fs::NDIRECT + 1],
}
impl Inode {
pub fn lock(&mut self) -> InodeLockGuard<'_> {
InodeLockGuard::new(self)
}
}
pub struct InodeLockGuard<'i> {
pub inode: &'i mut Inode,
}
impl<'i> InodeLockGuard<'i> {
pub fn new(inode: &mut Inode) -> InodeLockGuard<'_> {
unsafe {
ilock(inode as *mut Inode);
}
InodeLockGuard { inode }
}
}
impl<'i> core::ops::Drop for InodeLockGuard<'i> {
fn drop(&mut self) {
unsafe {
iunlock(self.inode as *mut Inode);
}
}
}

@ -1,45 +0,0 @@
use crate::{fs::Superblock, io::buf::Buffer, sync::spinlock::Spinlock};
#[repr(C)]
pub struct LogHeader {
pub n: i32,
pub blocks: [i32; crate::LOGSIZE],
}
#[repr(C)]
pub struct Log {
lock: Spinlock,
start: i32,
size: i32,
/// How many FS syscalls are executing.
outstanding: i32,
/// In commit(), please wait.
committing: i32,
dev: i32,
header: LogHeader,
}
extern "C" {
pub static mut log: Log;
pub fn initlog(dev: i32, superblock: *mut Superblock);
pub fn begin_op();
pub fn end_op();
pub fn log_write(buffer: *mut Buffer);
}
#[derive(Default)]
pub struct LogOperation;
impl LogOperation {
pub fn new() -> LogOperation {
unsafe {
begin_op();
}
LogOperation
}
}
impl core::ops::Drop for LogOperation {
fn drop(&mut self) {
unsafe {
end_op();
}
}
}

@ -1,89 +0,0 @@
//! On-disk file system format.
//! Both the kernel and user programs use this header file.
pub mod file;
pub mod inode;
pub mod log;
pub mod stat;
/// Root inode.
pub const ROOTINO: u64 = 1;
/// Block size.
pub const BSIZE: u32 = 1024;
// Disk layout:
// [ boot block | super block | log | inode blocks | free bit map | data blocks ]
//
// mkfs computes the super block and builds an initial file system.
// The super block describes the disk layout:
#[repr(C)]
pub struct Superblock {
/// Must be FSMAGIC.
pub magic: u32,
/// Size of file system image (blocks).
pub size: u32,
/// Number of data blocks.
pub nblocks: u32,
/// Number of inodes.
pub ninodes: u32,
/// Number of log blocks.
pub nlog: u32,
/// Block number of first log block.
pub logstart: u32,
/// Block number of first inode block.
pub inodestart: u32,
/// Block number of first free map block.
pub bmapstart: u32,
}
pub const FSMAGIC: u32 = 0x10203040;
pub const NDIRECT: usize = 12;
pub const NINDIRECT: usize = BSIZE as usize / core::mem::size_of::<u32>();
pub const MAXFILE: usize = NDIRECT + NINDIRECT;
// On-disk inode structure.
#[repr(C)]
pub struct DiskInode {
/// File type.
pub kind: i16,
/// Major device number (T_DEVICE only).
pub major: i16,
/// Minor device number (T_DEVICE only).
pub minor: i16,
/// Number of links to inode in file system.
pub nlink: i16,
/// Size of file (bytes).
pub size: u32,
/// Data block addresses.
pub addrs: [u32; NDIRECT + 1],
}
/// Inodes per block.
pub const IPB: u32 = BSIZE / core::mem::size_of::<DiskInode>() as u32;
/// Block containing inode i.
pub fn iblock(inode: u32, superblock: &Superblock) -> u32 {
inode / IPB + superblock.inodestart
}
/// Bitmap bits per block.
pub const BPB: u32 = BSIZE * 8;
/// Block of free map containing bit for block b.
pub fn bblock(block: u32, superblock: &Superblock) -> u32 {
block / BPB + superblock.bmapstart
}
/// Directory is a file containing a sequence of DirectoryEntry structures.
pub const DIRSIZ: usize = 14;
#[repr(C)]
pub struct DirectoryEntry {
pub inum: u16,
pub name: [u8; DIRSIZ],
}
pub static mut FS_INITIALIZED: bool = false;
extern "C" {
pub fn fsinit(dev: i32);
}

@ -1,18 +0,0 @@
pub const KIND_DIR: i16 = 1;
pub const KIND_FILE: i16 = 2;
pub const KIND_DEVICE: i16 = 3;
#[repr(C)]
#[derive(Default)]
pub struct Stat {
/// FS's disk device.
pub device: i32,
/// Inode number.
pub inode: u32,
/// Type of file.
pub kind: i16,
/// Number of links to file.
pub num_links: i16,
/// Size of file in bytes.
pub size: u64,
}

@ -1,55 +0,0 @@
#[cfg(target_arch = "riscv64")]
pub mod riscv;
pub mod trap;
pub mod cpu {
#[cfg(target_arch = "riscv64")]
pub use super::riscv::cpu::cpu_id;
}
pub mod interrupt {
#[cfg(target_arch = "riscv64")]
pub use crate::hal::{
arch::riscv::asm::{
intr_get as interrupts_enabled, intr_off as disable_interrupts,
intr_on as enable_interrupts,
},
hardware::riscv::plic::{
plic_claim as handle_interrupt, plic_complete as complete_interrupt, plicinit as init,
plicinithart as inithart,
},
};
}
pub mod mem {
#[cfg(target_arch = "riscv64")]
pub use super::riscv::{
asm::sfence_vma as flush_cached_pages,
mem::{
kstack, Pagetable, PAGE_SIZE, PHYSICAL_END, PTE_R, PTE_W, PTE_X, TRAMPOLINE, TRAPFRAME,
},
};
pub fn round_up_page(size: usize) -> usize {
(size + PAGE_SIZE - 1) & !(PAGE_SIZE - 1)
}
pub fn round_down_page(addr: usize) -> usize {
addr & !(PAGE_SIZE - 1)
}
}
pub mod virtual_memory {
#[cfg(target_arch = "riscv64")]
pub use super::riscv::virtual_memory::{
copyin, copyinstr, copyout, either_copyin, either_copyout, kvminit as init,
kvminithart as inithart, mappages, uvmalloc, uvmcopy, uvmcreate, uvmdealloc, uvmfirst,
uvmfree, uvmunmap,
};
}
pub mod clock {
#[cfg(target_arch = "riscv64")]
pub use super::riscv::trap::CLOCK_TICKS;
}

@ -1,253 +0,0 @@
use super::*;
use core::arch::asm;
/// Which hart (core) is this?
#[inline(always)]
pub unsafe fn r_mhartid() -> u64 {
let x: u64;
asm!("csrr {}, mhartid", out(reg) x);
x
}
// Machine Status Register, mstatus
#[inline(always)]
pub unsafe fn r_mstatus() -> u64 {
let x: u64;
asm!("csrr {}, mstatus", out(reg) x);
x
}
#[inline(always)]
pub unsafe fn w_mstatus(x: u64) {
asm!("csrw mstatus, {}", in(reg) x);
}
// Machine Exception Program Counter
// MEPC holds the instruction address to which a return from exception will go.
#[inline(always)]
pub unsafe fn w_mepc(x: u64) {
asm!("csrw mepc, {}", in(reg) x);
}
// Supervisor Status Register, sstatus
#[inline(always)]
pub unsafe fn r_sstatus() -> u64 {
let x: u64;
asm!("csrr {}, sstatus", out(reg) x);
x
}
#[inline(always)]
pub unsafe fn w_sstatus(x: u64) {
asm!("csrw sstatus, {}", in(reg) x);
}
// Supervisor Interrupt Pending
#[inline(always)]
pub unsafe fn r_sip() -> u64 {
let x: u64;
asm!("csrr {}, sip", out(reg) x);
x
}
#[inline(always)]
pub unsafe fn w_sip(x: u64) {
asm!("csrw sip, {}", in(reg) x);
}
// Supervisor Interrupt Enable
#[inline(always)]
pub unsafe fn r_sie() -> u64 {
let x: u64;
asm!("csrr {}, sie", out(reg) x);
x
}
#[inline(always)]
pub unsafe fn w_sie(x: u64) {
asm!("csrw sie, {}", in(reg) x);
}
// Machine-mode Interrupt Enable
#[inline(always)]
pub unsafe fn r_mie() -> u64 {
let x: u64;
asm!("csrr {}, mie", out(reg) x);
x
}
#[inline(always)]
pub unsafe fn w_mie(x: u64) {
asm!("csrw mie, {}", in(reg) x);
}
// Supervisor Exception Program Counter
// SEPC holds the instruction address to which a return from exception will go.
#[inline(always)]
pub unsafe fn r_sepc() -> u64 {
let x: u64;
asm!("csrr {}, sepc", out(reg) x);
x
}
#[inline(always)]
pub unsafe fn w_sepc(x: u64) {
asm!("csrw sepc, {}", in(reg) x);
}
// Machine Exception Delegation
#[inline(always)]
pub unsafe fn r_medeleg() -> u64 {
let x: u64;
asm!("csrr {}, medeleg", out(reg) x);
x
}
#[inline(always)]
pub unsafe fn w_medeleg(x: u64) {
asm!("csrw medeleg, {}", in(reg) x);
}
// Machine Interrupt Delegation
#[inline(always)]
pub unsafe fn r_mideleg() -> u64 {
let x: u64;
asm!("csrr {}, mideleg", out(reg) x);
x
}
#[inline(always)]
pub unsafe fn w_mideleg(x: u64) {
asm!("csrw mideleg, {}", in(reg) x);
}
// Supervisor Trap-Vector Base Address
#[inline(always)]
pub unsafe fn r_stvec() -> u64 {
let x: u64;
asm!("csrr {}, stvec", out(reg) x);
x
}
#[inline(always)]
pub unsafe fn w_stvec(x: u64) {
asm!("csrw stvec, {}", in(reg) x);
}
// Machine-mode Interrupt Vector
#[inline(always)]
pub unsafe fn w_mtvec(x: u64) {
asm!("csrw mtvec, {}", in(reg) x);
}
// Physical Memory Protection
#[inline(always)]
pub unsafe fn w_pmpcfg0(x: u64) {
asm!("csrw pmpcfg0, {}", in(reg) x);
}
#[inline(always)]
pub unsafe fn w_pmpaddr0(x: u64) {
asm!("csrw pmpaddr0, {}", in(reg) x);
}
// Supervisor Address Translation and Protection
// SATP holds the address of the page table.
#[inline(always)]
pub unsafe fn r_satp() -> u64 {
let x: u64;
asm!("csrr {}, satp", out(reg) x);
x
}
#[inline(always)]
pub unsafe fn w_satp(x: u64) {
asm!("csrw satp, {}", in(reg) x);
}
#[inline(always)]
pub unsafe fn w_mscratch(x: u64) {
asm!("csrw mscratch, {}", in(reg) x);
}
// Supervisor Trap Cause
#[inline(always)]
pub unsafe fn r_scause() -> u64 {
let x: u64;
asm!("csrr {}, scause", out(reg) x);
x
}
// Supervisor Trap Value
#[inline(always)]
pub unsafe fn r_stval() -> u64 {
let x: u64;
asm!("csrr {}, stval", out(reg) x);
x
}
// Machine-mode Counter-Enable
#[inline(always)]
pub unsafe fn r_mcounteren() -> u64 {
let x: u64;
asm!("csrr {}, mcounteren", out(reg) x);
x
}
#[inline(always)]
pub unsafe fn w_mcounteren(x: u64) {
asm!("csrw mcounteren, {}", in(reg) x);
}
// Timer counter (the `time` CSR)
#[inline(always)]
pub unsafe fn r_time() -> u64 {
let x: u64;
asm!("csrr {}, time", out(reg) x);
x
}
// Enable device interrupts
#[inline(always)]
pub unsafe fn intr_on() {
w_sstatus(r_sstatus() | SSTATUS_SIE);
}
// Disable device interrupts
#[inline(always)]
pub unsafe fn intr_off() {
w_sstatus(r_sstatus() & !SSTATUS_SIE);
}
// Are device interrupts enabled?
#[inline(always)]
pub unsafe fn intr_get() -> i32 {
if (r_sstatus() & SSTATUS_SIE) > 0 {
1
} else {
0
}
}
#[inline(always)]
pub unsafe fn r_sp() -> u64 {
let x: u64;
asm!("mv {}, sp", out(reg) x);
x
}
// Read and write TP (thread pointer), which xv6 uses
// to hold this core's hartid, the index into cpus[].
// pub fn rv_r_tp() -> u64;
#[inline(always)]
pub unsafe fn r_tp() -> u64 {
let x: u64;
asm!("mv {}, tp", out(reg) x);
x
}
#[inline(always)]
pub unsafe fn w_tp(x: u64) {
asm!("mv tp, {}", in(reg) x);
}
#[inline(always)]
pub unsafe fn r_ra() -> u64 {
let x: u64;
asm!("mv {}, ra", out(reg) x);
x
}
// Flush the Translation Look-aside Buffer (TLB).
#[inline(always)]
pub unsafe fn sfence_vma() {
// The "zero, zero" means flush all TLB entries.
asm!("sfence.vma zero, zero");
}

View File

@ -1,5 +0,0 @@
use super::asm::r_tp;
pub fn cpu_id() -> usize {
unsafe { r_tp() as usize }
}

@ -1,91 +0,0 @@
// Physical memory layout
// QEMU -machine virt is set up like this,
// based on QEMU's hw/riscv/virt.c:
//
// 00001000 - boot ROM, provided by qemu
// 02000000 - CLINT
// 0C000000 - PLIC
// 10000000 - uart0
// 10001000 - virtio disk
// 80000000 - boot ROM jumps here in machine mode (-kernel loads the kernel here)
// unused RAM after 80000000.
// The kernel uses physical memory as so:
// 80000000 - entry.S, then kernel text and data
// end - start of kernel page allocation data
// PHYSTOP - end of RAM used by the kernel
pub type PagetableEntry = u64;
pub type Pagetable = *mut [PagetableEntry; 512];
/// The PagetableEntry is valid.
pub const PTE_V: i32 = 1 << 0;
/// The PagetableEntry is readable.
pub const PTE_R: i32 = 1 << 1;
/// The PagetableEntry is writable.
pub const PTE_W: i32 = 1 << 2;
/// The PagetableEntry is executable.
pub const PTE_X: i32 = 1 << 3;
/// The PagetableEntry is user-accessible.
pub const PTE_U: i32 = 1 << 4;
/// Page-based 39-bit virtual addressing.
/// Details at section 5.4 of the RISC-V specification.
pub const SATP_SV39: u64 = 8 << 60;
pub fn make_satp(pagetable: Pagetable) -> u64 {
SATP_SV39 | (pagetable as usize as u64 >> 12)
}
/// Bytes per page.
pub const PAGE_SIZE: usize = 4096;
/// Bits of offset within a page
const PAGE_OFFSET: usize = 12;
/// The kernel starts here.
pub const KERNEL_BASE: usize = 0x8000_0000;
/// The end of physical memory.
pub const PHYSICAL_END: usize = KERNEL_BASE + (128 * 1024 * 1024);
/// The maximum virtual address.
///
/// VIRTUAL_MAX is actually one bit less than the max allowed by
/// Sv39 to avoid having to sign-extend virtual addresses
/// that have the high bit set.
pub const VIRTUAL_MAX: usize = 1 << (9 + 9 + 9 + 12 - 1);
/// Map the trampoline page to the highest
/// address in both user and kernel space.
pub const TRAMPOLINE: usize = VIRTUAL_MAX - PAGE_SIZE;
/// Map kernel stacks beneath the trampoline,
/// each surrounded by invalid guard pages.
pub fn kstack(page: usize) -> usize {
TRAMPOLINE - (page + 1) * 2 * PAGE_SIZE
}
/// User memory layout.
/// Address zero first:
/// - text
/// - original data and bss
/// - fixed-size stack
/// - expandable heap
/// ...
/// - TRAPFRAME (p->trapframe, used by the trampoline)
/// - TRAMPOLINE (the same page as in the kernel)
pub const TRAPFRAME: usize = TRAMPOLINE - PAGE_SIZE;
// Convert a physical address to a PagetableEntry.
pub fn pa2pte(pa: usize) -> usize {
(pa >> 12) << 10
}
// Convert a PagetableEntry to a physical address.
pub fn pte2pa(pte: usize) -> usize {
(pte >> 10) << 12
}
// Extract the three 9-bit page table indices from a virtual address.
const PXMASK: usize = 0x1ffusize; // 9 bits.
fn pxshift(level: usize) -> usize {
PAGE_OFFSET + (level * 9)
}
pub fn px(level: usize, virtual_addr: usize) -> usize {
(virtual_addr >> pxshift(level)) & PXMASK
}

View File

@ -1,39 +0,0 @@
pub mod asm;
pub mod cpu;
pub mod mem;
pub mod start;
pub mod trap;
pub mod virtual_memory;
/// Previous mode
pub const MSTATUS_MPP_MASK: u64 = 3 << 11;
pub const MSTATUS_MPP_M: u64 = 3 << 11;
pub const MSTATUS_MPP_S: u64 = 1 << 11;
pub const MSTATUS_MPP_U: u64 = 0 << 11;
/// Machine-mode interrupt enable.
pub const MSTATUS_MIE: u64 = 1 << 3;
/// Previous mode: 1 = Supervisor, 0 = User
pub const SSTATUS_SPP: u64 = 1 << 8;
/// Supervisor Previous Interrupt Enable
pub const SSTATUS_SPIE: u64 = 1 << 5;
/// User Previous Interrupt Enable
pub const SSTATUS_UPIE: u64 = 1 << 4;
/// Supervisor Interrupt Enable
pub const SSTATUS_SIE: u64 = 1 << 1;
/// User Interrupt Enable
pub const SSTATUS_UIE: u64 = 1 << 0;
/// Supervisor External Interrupt Enable
pub const SIE_SEIE: u64 = 1 << 9;
/// Supervisor Timer Interrupt Enable
pub const SIE_STIE: u64 = 1 << 5;
/// Supervisor Software Interrupt Enable
pub const SIE_SSIE: u64 = 1 << 1;
/// Machine-mode External Interrupt Enable
pub const MIE_MEIE: u64 = 1 << 11;
/// Machine-mode Timer Interrupt Enable
pub const MIE_MTIE: u64 = 1 << 7;
/// Machine-mode Software Interrupt Enable
pub const MIE_MSIE: u64 = 1 << 3;

@ -1,46 +0,0 @@
use crate::{
hal::{
arch::riscv::{asm, MSTATUS_MPP_MASK, MSTATUS_MPP_S, SIE_SEIE, SIE_SSIE, SIE_STIE},
hardware::riscv::clint,
},
main, NCPU,
};
use core::arch::asm;
#[no_mangle]
pub static mut stack0: [u8; 4096 * NCPU] = [0u8; 4096 * NCPU];
// entry.S jumps here in machine mode on stack0
#[no_mangle]
pub unsafe extern "C" fn start() {
// Set M Previous Privilege mode to Supervisor, for mret.
let mut x = asm::r_mstatus();
x &= !MSTATUS_MPP_MASK;
x |= MSTATUS_MPP_S;
asm::w_mstatus(x);
// Set M Exception Program Counter to main, for mret.
asm::w_mepc(main as usize as u64);
// Disable paging for now.
asm::w_satp(0);
// Delegate all interrupts and exceptions to supervisor mode.
asm::w_medeleg(0xffffu64);
asm::w_mideleg(0xffffu64);
asm::w_sie(asm::r_sie() | SIE_SEIE | SIE_STIE | SIE_SSIE);
// Configure Physical Memory Protection to give
// supervisor mode access to all of physical memory.
asm::w_pmpaddr0(0x3fffffffffffffu64);
asm::w_pmpcfg0(0xf);
// Ask for clock interrupts.
clint::timerinit();
// Keep each CPU's hartid in its tp register, for Cpu::current_id().
asm::w_tp(asm::r_mhartid());
// Switch to supervisor mode and jump to main().
asm!("mret");
}

@ -1,251 +0,0 @@
use super::{asm, mem::make_satp, SSTATUS_SPIE, SSTATUS_SPP};
use crate::{
hal::{
arch::{
interrupt,
mem::{PAGE_SIZE, TRAMPOLINE},
},
platform::VIRTIO0_IRQ,
},
println,
proc::{
cpu::Cpu,
process::{Process, ProcessState},
scheduler::{r#yield, wakeup},
},
sync::mutex::Mutex,
syscall::syscall,
};
use core::ptr::addr_of;
extern "C" {
pub fn kernelvec();
// pub fn usertrap();
// pub fn usertrapret();
// fn syscall();
// pub fn userret(satp: u64);
fn virtio_disk_intr();
pub static mut trampoline: [u8; 0];
pub static mut uservec: [u8; 0];
pub static mut userret: [u8; 0];
}
pub static CLOCK_TICKS: Mutex<usize> = Mutex::new(0);
/// Set up to take exceptions and traps while in the kernel.
pub unsafe fn trapinithart() {
asm::w_stvec(kernelvec as usize as u64);
}
pub fn clockintr() {
let mut ticks = CLOCK_TICKS.lock_spinning();
*ticks += 1;
unsafe {
wakeup(addr_of!(CLOCK_TICKS).cast_mut().cast());
}
}
/// Check if it's an external interrupt or software interrupt and handle it.
///
/// Returns 2 if timer interrupt, 1 if other device, 0 if not recognized.
pub unsafe fn devintr() -> i32 {
let scause = asm::r_scause();
if (scause & 0x8000000000000000 > 0) && (scause & 0xff) == 9 {
// This is a supervisor external interrupt, via PLIC.
// IRQ indicates which device interrupted.
let irq = interrupt::handle_interrupt();
let mut uart_interrupt = false;
for (uart_irq, uart) in &crate::hal::platform::UARTS {
if irq == *uart_irq {
uart_interrupt = true;
uart.interrupt();
}
}
if !uart_interrupt {
if irq == VIRTIO0_IRQ {
virtio_disk_intr();
} else if irq > 0 {
println!("unexpected interrupt irq={}", irq);
}
}
// The PLIC allows each device to raise at most one
// interrupt at a time; tell the PLIC the device is
// now allowed to interrupt again.
if irq > 0 {
interrupt::complete_interrupt(irq);
}
1
} else if scause == 0x8000000000000001 {
// Software interrupt from a machine-mode timer interrupt,
// forwarded by timervec in kernelvec.S.
if Cpu::current_id() == 0 {
clockintr();
}
// Acknowledge the software interrupt by
// clearing the SSIP bit in sip.
asm::w_sip(asm::r_sip() & !2);
2
} else {
0
}
}
/// Return to user space
#[no_mangle]
pub unsafe extern "C" fn usertrapret() -> ! {
let proc = Process::current().unwrap();
// We're about to switch the destination of traps from
// kerneltrap() to usertrap(), so turn off interrupts until
// we're back in user space, where usertrap() is correct.
interrupt::disable_interrupts();
// Send syscalls, interrupts, and exceptions to uservec in trampoline.S
let trampoline_uservec =
TRAMPOLINE + (addr_of!(uservec) as usize) - (addr_of!(trampoline) as usize);
asm::w_stvec(trampoline_uservec as u64);
// Set up trapframe values that uservec will need when
// the process next traps into the kernel.
// kernel page table
(*proc.trapframe).kernel_satp = asm::r_satp();
// process's kernel stack
(*proc.trapframe).kernel_sp = proc.kernel_stack + PAGE_SIZE as u64;
(*proc.trapframe).kernel_trap = usertrap as usize as u64;
// hartid for Cpu::current_id()
(*proc.trapframe).kernel_hartid = asm::r_tp();
// Set up the registers that trampoline.S's
// sret will use to get to user space.
    // Set S Previous Privilege mode to User.
let mut x = asm::r_sstatus();
// Clear SPP to 0 for user mode.
x &= !SSTATUS_SPP;
// Enable interrupts in user mode.
x |= SSTATUS_SPIE;
asm::w_sstatus(x);
// Set S Exception Program Counter to the saved user pc.
asm::w_sepc((*proc.trapframe).epc);
// Tell trampoline.S the user page table to switch to.
let satp = make_satp(proc.pagetable);
// Jump to userret in trampoline.S at the top of memory, which
// switches to the user page table, restores user registers,
// and switches to user mode with sret.
let trampoline_userret =
TRAMPOLINE + (addr_of!(userret) as usize) - (addr_of!(trampoline) as usize);
let trampoline_userret = trampoline_userret as *const ();
// Rust's most dangerous function: core::mem::transmute
let trampoline_userret = core::mem::transmute::<*const (), fn(u64) -> !>(trampoline_userret);
trampoline_userret(satp)
}
/// Interrupts and exceptions from kernel code go here via kernelvec,
/// on whatever the current kernel stack is.
#[no_mangle]
pub unsafe extern "C" fn kerneltrap() {
let sepc = asm::r_sepc();
let sstatus = asm::r_sstatus();
let scause = asm::r_scause();
if sstatus & SSTATUS_SPP == 0 {
panic!("kerneltrap: not from supervisor mode");
} else if interrupt::interrupts_enabled() != 0 {
panic!("kerneltrap: interrupts enabled");
}
let which_dev = devintr();
if which_dev == 0 {
println!(
"scause {}\nsepc={} stval={}",
scause,
asm::r_sepc(),
asm::r_stval()
);
panic!("kerneltrap");
} else if which_dev == 2
&& Process::current().is_some()
&& Process::current().unwrap().state == ProcessState::Running
{
// Give up the CPU if this is a timer interrupt.
r#yield();
}
// The yield() may have caused some traps to occur,
// so restore trap registers for use by kernelvec.S's sepc instruction.
asm::w_sepc(sepc);
asm::w_sstatus(sstatus);
}
/// Handle an interrupt, exception, or system call from userspace.
///
/// Called from trampoline.S
#[no_mangle]
pub unsafe extern "C" fn usertrap() {
if asm::r_sstatus() & SSTATUS_SPP != 0 {
panic!("usertrap: not from user mode");
}
// Send interrupts and exceptions to kerneltrap(),
// since we're now in the kernel.
asm::w_stvec(kernelvec as usize as u64);
let proc = Process::current().unwrap();
// Save user program counter.
(*proc.trapframe).epc = asm::r_sepc();
if asm::r_scause() == 8 {
// System call
if proc.is_killed() {
proc.exit(-1);
}
// sepc points to the ecall instruction, but
// we want to return to the next instruction.
(*proc.trapframe).epc += 4;
// An interrupt will change sepc, scause, and sstatus,
// so enable only now that we're done with those registers.
interrupt::enable_interrupts();
syscall();
}
let which_dev = devintr();
if asm::r_scause() != 8 && which_dev == 0 {
println!(
"usertrap(): unexpected scause {} {}\n\tsepc={} stval={}",
asm::r_scause(),
proc.pid,
asm::r_sepc(),
asm::r_stval()
);
proc.set_killed(true);
}
if proc.is_killed() {
proc.exit(-1);
}
// Give up the CPU if this is a timer interrupt.
if which_dev == 2 {
r#yield();
}
usertrapret();
}

@ -1,616 +0,0 @@
use crate::{
hal::{
arch::{
mem::{flush_cached_pages, round_down_page, round_up_page},
riscv::{
asm,
mem::{
kstack, make_satp, pte2pa, Pagetable, PagetableEntry, KERNEL_BASE, PAGE_SIZE,
PHYSICAL_END, PTE_R, PTE_U, PTE_V, PTE_W, PTE_X, TRAMPOLINE, VIRTUAL_MAX,
},
},
},
hardware::riscv::plic::PLIC,
},
mem::{
kalloc::{kalloc, kfree},
memmove, memset,
},
proc::process::Process,
};
use core::ptr::{addr_of, addr_of_mut, null_mut};
extern "C" {
/// kernel.ld sets this to end of kernel code.
pub static etext: [u8; 0];
/// trampoline.S
pub static trampoline: [u8; 0];
// pub fn either_copyin(dst: *mut u8, user_src: i32, src: u64, len: u64) -> i32;
// pub fn either_copyout(user_dst: i32, dst: u64, src: *mut u8, len: u64) -> i32;
}
/// The kernel's pagetable.
pub static mut KERNEL_PAGETABLE: Pagetable = null_mut();
/// Make a direct-map page table for the kernel.
pub unsafe fn kvmmake() -> Pagetable {
let pagetable = kalloc() as Pagetable;
if pagetable.is_null() {
panic!("kalloc");
}
memset(pagetable.cast(), 0, PAGE_SIZE);
for page in &crate::hal::platform::DIRECT_MAPPED_PAGES {
kvmmap(pagetable, *page, *page, PAGE_SIZE, PTE_R | PTE_W);
}
// UART registers
for (_, uart) in &crate::hal::platform::UARTS {
kvmmap(
pagetable,
uart.base_address,
uart.base_address,
PAGE_SIZE,
PTE_R | PTE_W,
);
}
// VirtIO MMIO disk interfaces
for (_, virtio_disk_addr) in &crate::hal::platform::VIRTIO_DISKS {
kvmmap(
pagetable,
*virtio_disk_addr,
*virtio_disk_addr,
PAGE_SIZE,
PTE_R | PTE_W,
);
}
// PLIC
kvmmap(pagetable, PLIC, PLIC, 0x400000, PTE_R | PTE_W);
let etext_addr = addr_of!(etext) as usize;
// Map kernel text executable and read-only.
kvmmap(
pagetable,
KERNEL_BASE,
KERNEL_BASE,
etext_addr - KERNEL_BASE,
PTE_R | PTE_X,
);
// Map kernel data and the physical RAM we'll make use of.
kvmmap(
pagetable,
etext_addr,
etext_addr,
PHYSICAL_END - etext_addr,
PTE_R | PTE_W,
);
    // Map the trampoline for trap entry/exit to
// the highest virtual address in the kernel.
kvmmap(
pagetable,
TRAMPOLINE,
addr_of!(trampoline) as usize,
PAGE_SIZE,
PTE_R | PTE_X,
);
// Allocate and map a kernel stack for each process.
for i in 0..crate::NPROC {
let page = kalloc();
if page.is_null() {
panic!("kalloc");
}
let virtual_addr = kstack(i);
kvmmap(
pagetable,
virtual_addr,
page as usize,
PAGE_SIZE,
PTE_R | PTE_W,
);
}
pagetable
}
/// Initialize the one kernel_pagetable.
pub unsafe fn kvminit() {
KERNEL_PAGETABLE = kvmmake();
}
/// Switch hardware pagetable register to the kernel's pagetable and enable paging.
pub unsafe fn kvminithart() {
// Wait for any previous writes to the pagetable memory to finish.
flush_cached_pages();
asm::w_satp(make_satp(KERNEL_PAGETABLE));
// Flush stale entries from the TLB.
flush_cached_pages();
}
/// Return the address of the PTE in pagetable
/// `pagetable` that corresponds to virtual address
/// `virtual_addr`. If `alloc` is true, create any
/// required pagetable pages.
///
/// The RISC-V Sv39 scheme has three levels of pagetable
/// pages. A pagetable page contains 512 64-bit PTEs.
///
/// A 64-bit virtual address is split into five fields:
/// - 0..12: 12 bits of byte offset within the page.
/// - 12..21: 9 bits of level-0 index.
/// - 21..30: 9 bits of level-1 index.
/// - 30..39: 9 bits of level-2 index.
/// - 39..64: Must be zero.
pub unsafe fn walk(
mut pagetable: Pagetable,
virtual_addr: usize,
alloc: bool,
) -> *mut PagetableEntry {
if virtual_addr > VIRTUAL_MAX {
panic!("walk");
}
let mut level = 2;
while level > 0 {
let pte =
addr_of_mut!(pagetable.as_mut().unwrap()[(virtual_addr >> (12 + (level * 9))) & 0x1ff]);
if (*pte) & PTE_V as u64 > 0 {
pagetable = (((*pte) >> 10) << 12) as usize as Pagetable;
} else {
if !alloc {
return null_mut();
}
pagetable = kalloc() as Pagetable;
if pagetable.is_null() {
return null_mut();
}
memset(pagetable.cast(), 0, PAGE_SIZE);
*pte = (((pagetable as usize) >> 12) << 10) as PagetableEntry | PTE_V as u64;
}
level -= 1;
}
addr_of_mut!(pagetable.as_mut().unwrap()[(virtual_addr >> 12) & 0x1ff])
}
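The index arithmetic inside `walk` can be sketched on its own. This is a host-runnable illustration of the Sv39 split described in the doc comment; the constants and the `vpn_index` name are assumptions for the example, not kernel identifiers:

```rust
// Illustrative sketch of the Sv39 index extraction performed by `walk`.
const PAGE_OFFSET_BITS: usize = 12;
const INDEX_BITS: usize = 9;

/// Extract the 9-bit pagetable index for `level` (0, 1, or 2) from a
/// virtual address, mirroring `(va >> (12 + level * 9)) & 0x1ff`.
fn vpn_index(virtual_addr: usize, level: usize) -> usize {
    (virtual_addr >> (PAGE_OFFSET_BITS + level * INDEX_BITS)) & 0x1ff
}

fn main() {
    // 0x8020_1000 decomposes as level-2 index 2, level-1 index 1,
    // level-0 index 1, byte offset 0.
    let va = 0x8020_1000usize;
    assert_eq!(vpn_index(va, 2), 2);
    assert_eq!(vpn_index(va, 1), 1);
    assert_eq!(vpn_index(va, 0), 1);
    assert_eq!(va & 0xfff, 0);
}
```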
/// Look up a virtual address and return the physical address or 0 if not mapped.
///
/// Can only be used to look up user pages.
#[no_mangle]
pub unsafe extern "C" fn walkaddr(pagetable: Pagetable, virtual_addr: usize) -> u64 {
if virtual_addr > VIRTUAL_MAX {
return 0;
}
let pte = walk(pagetable, virtual_addr, false);
if pte.is_null() || *pte & PTE_V as u64 == 0 || *pte & PTE_U as u64 == 0 {
return 0;
}
pte2pa(*pte as usize) as u64
}
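The `(pte >> 10) << 12` and `(pa >> 12) << 10` shifts that recur throughout this file convert between physical addresses and Sv39 PTEs, which keep flags in the low 10 bits and the physical page number starting at bit 10. A minimal sketch of the round trip (assuming that standard layout):

```rust
// Sketch of the PTE <-> physical-address shifts used throughout this file.
const PTE_V: u64 = 1 << 0; // valid bit, as in the kernel's PTE_V

fn pa2pte(pa: u64) -> u64 {
    (pa >> 12) << 10
}

fn pte2pa(pte: u64) -> u64 {
    (pte >> 10) << 12
}

fn main() {
    let pa = 0x8000_2000u64;
    let pte = pa2pte(pa) | PTE_V;
    // Round-tripping drops the flag bits and restores the page address.
    assert_eq!(pte2pa(pte), pa);
    assert_eq!(pte & PTE_V, PTE_V);
}
```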
/// Add a mapping to the kernel page table.
///
/// Only used when booting.
/// Does not flush TLB or enable paging.
pub unsafe fn kvmmap(
pagetable: Pagetable,
virtual_addr: usize,
physical_addr: usize,
size: usize,
perm: i32,
) {
if mappages(pagetable, virtual_addr, size, physical_addr, perm) != 0 {
panic!("kvmmap");
}
}
/// Create PagetableEntries for virtual addresses starting at `virtual_addr`
/// that refer to physical addresses starting at `physical_addr`.
///
/// `virtual_addr` and size might not be page-aligned.
/// Returns 0 on success, -1 if walk() couldn't allocate a needed pagetable page.
pub unsafe fn mappages(
pagetable: Pagetable,
virtual_addr: usize,
size: usize,
mut physical_addr: usize,
perm: i32,
) -> i32 {
if size == 0 {
panic!("mappages: size = 0");
}
let mut a = round_down_page(virtual_addr);
let last = round_down_page(virtual_addr + size - 1);
loop {
let pte = walk(pagetable, a, true);
if pte.is_null() {
return -1;
}
if (*pte) & PTE_V as u64 > 0 {
panic!("mappages: remap");
}
*pte = ((physical_addr as u64 >> 12) << 10) | perm as u64 | PTE_V as u64;
if a == last {
break;
} else {
a += PAGE_SIZE;
physical_addr += PAGE_SIZE;
}
}
0
}
/// Remove `npages` of mappings starting from `virtual_addr`.
///
/// `virtual_addr` must be page-aligned. The mappings must exist.
/// Optionally free the physical memory.
pub unsafe fn uvmunmap(pagetable: Pagetable, virtual_addr: usize, num_pages: usize, free: bool) {
if virtual_addr % PAGE_SIZE != 0 {
panic!("uvmunmap: not aligned");
}
let mut a = virtual_addr;
while a < virtual_addr + num_pages * PAGE_SIZE {
let pte = walk(pagetable, a, false);
if pte.is_null() {
panic!("uvmunmap: walk");
} else if (*pte) & PTE_V as u64 == 0 {
panic!("uvmunmap: not mapped");
} else if ((*pte) & 0x3ffu64) == PTE_V as u64 {
panic!("uvmunmap: not a leaf");
} else if free {
let physical_addr = (((*pte) >> 10) << 12) as usize as *mut u8;
kfree(physical_addr.cast());
}
*pte = 0;
a += PAGE_SIZE;
}
}
/// Create an empty user pagetable.
///
/// Returns 0 if out of memory.
pub unsafe fn uvmcreate() -> Pagetable {
let pagetable = kalloc() as Pagetable;
if pagetable.is_null() {
return null_mut();
}
memset(pagetable.cast(), 0, PAGE_SIZE);
pagetable
}
/// Load the user initcode into address 0 of pagetable for the very first process.
///
/// `size` must be less than `PAGE_SIZE`.
pub unsafe fn uvmfirst(pagetable: Pagetable, src: *mut u8, size: usize) {
if size >= PAGE_SIZE {
panic!("uvmfirst: more than a page");
}
let mem = kalloc();
memset(mem, 0, PAGE_SIZE);
mappages(
pagetable,
0,
PAGE_SIZE,
mem as usize,
PTE_W | PTE_R | PTE_X | PTE_U,
);
memmove(mem, src, size as u32);
}
/// Allocate PagetableEntries and physical memory to grow process
/// from `old_size` to `new_size`, which need not be page aligned.
///
/// Returns new size or 0 on error.
#[no_mangle]
pub unsafe extern "C" fn uvmalloc(
pagetable: Pagetable,
mut old_size: usize,
new_size: usize,
xperm: i32,
) -> u64 {
if new_size < old_size {
return old_size as u64;
}
old_size = round_up_page(old_size);
let mut a = old_size;
while a < new_size {
let mem = kalloc();
if mem.is_null() {
uvmdealloc(pagetable, a, old_size);
return 0;
}
memset(mem.cast(), 0, PAGE_SIZE);
if mappages(pagetable, a, PAGE_SIZE, mem as usize, PTE_R | PTE_U | xperm) != 0 {
kfree(mem.cast());
uvmdealloc(pagetable, a, old_size);
return 0;
}
a += PAGE_SIZE;
}
new_size as u64
}
/// Deallocate user pages to bring the process size from `old_size` to `new_size`.
///
/// `old_size` and `new_size` need not be page-aligned, nor does `new_size` need
/// to be less than `old_size`. `old_size` can be larger than the actual process
/// size. Returns the new process size.
#[no_mangle]
pub unsafe extern "C" fn uvmdealloc(pagetable: Pagetable, old_size: usize, new_size: usize) -> u64 {
if new_size >= old_size {
return old_size as u64;
}
if round_up_page(new_size) < round_up_page(old_size) {
let num_pages = (round_up_page(old_size) - round_up_page(new_size)) / PAGE_SIZE;
uvmunmap(pagetable, round_up_page(new_size), num_pages, true);
}
new_size as u64
}
/// Recursively free pagetable pages.
///
/// All leaf mappings must have already been removed.
pub unsafe fn freewalk(pagetable: Pagetable) {
// There are 2^9 = 512 PagetableEntry's in a Pagetable.
for i in 0..512 {
let pte: &mut PagetableEntry = &mut pagetable.as_mut().unwrap()[i];
if *pte & PTE_V as u64 > 0 && (*pte & (PTE_R | PTE_W | PTE_X) as u64) == 0 {
// This PagetableEntry points to a lower-level pagetable.
let child = ((*pte) >> 10) << 12;
freewalk(child as usize as Pagetable);
*pte = 0;
} else if *pte & PTE_V as u64 > 0 {
panic!("freewalk: leaf");
}
}
kfree(pagetable.cast());
}
/// Free user memory pages, then free pagetable pages.
pub unsafe fn uvmfree(pagetable: Pagetable, size: usize) {
uvmunmap(pagetable, 0, round_up_page(size) / PAGE_SIZE, true);
freewalk(pagetable);
}
/// Given a parent process's pagetable, copy
/// its memory into a child's pagetable.
///
/// Copies both the pagetable and the physical memory.
/// Returns 0 on success, -1 on failure.
/// Frees any allocated pages on failure.
pub unsafe fn uvmcopy(old: Pagetable, new: Pagetable, size: usize) -> i32 {
let mut i = 0;
while i < size {
let pte = walk(old, i, false);
if pte.is_null() {
panic!("uvmcopy: PagetableEntry should exist");
} else if (*pte) & PTE_V as u64 == 0 {
panic!("uvmcopy: page not present");
}
let pa = ((*pte) >> 10) << 12;
let flags = (*pte) & 0x3ffu64;
let mem = kalloc();
if mem.is_null() {
uvmunmap(new, 0, i / PAGE_SIZE, true);
return -1;
}
memmove(
mem.cast(),
(pa as usize as *mut u8).cast(),
PAGE_SIZE as u64 as u32,
);
if mappages(new, i, PAGE_SIZE, mem as usize, flags as i32) != 0 {
kfree(mem.cast());
uvmunmap(new, 0, i / PAGE_SIZE, true);
return -1;
}
i += PAGE_SIZE;
}
0
}
/// Mark a PagetableEntry invalid for user access.
///
/// Used by exec for the user stack guard page.
#[no_mangle]
pub unsafe extern "C" fn uvmclear(pagetable: Pagetable, virtual_addr: usize) {
let pte = walk(pagetable, virtual_addr, false);
if pte.is_null() {
panic!("uvmclear");
}
*pte &= !(PTE_U as u64);
}
/// Copy from kernel to user.
///
/// Copy `len` bytes from `src` to virtual address `dst_virtual_addr` in a given pagetable.
/// Returns 0 on success, -1 on error.
#[no_mangle]
pub unsafe extern "C" fn copyout(
pagetable: Pagetable,
mut dst_virtual_addr: usize,
mut src: *mut u8,
mut len: usize,
) -> i32 {
while len > 0 {
let va0 = round_down_page(dst_virtual_addr);
let pa0 = walkaddr(pagetable, va0) as usize;
if pa0 == 0 {
return -1;
}
let mut n = PAGE_SIZE - (dst_virtual_addr - va0);
if n > len {
n = len;
}
memmove(
((pa0 + dst_virtual_addr - va0) as *mut u8).cast(),
src,
n as u32,
);
len -= n;
src = src.add(n);
dst_virtual_addr = va0 + PAGE_SIZE;
}
0
}
/// Copy from user to kernel.
///
/// Copy `len` bytes to `dst` from virtual address `src_virtual_addr` in a given pagetable.
/// Returns 0 on success, -1 on error.
#[no_mangle]
pub unsafe extern "C" fn copyin(
pagetable: Pagetable,
mut dst: *mut u8,
mut src_virtual_addr: usize,
mut len: usize,
) -> i32 {
while len > 0 {
let va0 = round_down_page(src_virtual_addr);
let pa0 = walkaddr(pagetable, va0) as usize;
if pa0 == 0 {
return -1;
}
let mut n = PAGE_SIZE - (src_virtual_addr - va0);
if n > len {
n = len;
}
memmove(
dst.cast(),
((pa0 + src_virtual_addr - va0) as *mut u8).cast(),
n as u32,
);
len -= n;
dst = dst.add(n);
src_virtual_addr = va0 + PAGE_SIZE;
}
0
}
/// Copy to either a user address or a kernel address,
/// depending on `user_dst`.
/// Returns 0 on success, -1 on error.
#[no_mangle]
pub unsafe extern "C" fn either_copyout(
user_dst: i32,
dst: usize,
src: *mut u8,
len: usize,
) -> i32 {
let p = Process::current().unwrap();
if user_dst > 0 {
copyout(p.pagetable, dst, src, len)
} else {
memmove(dst as *mut u8, src, len as u32);
0
}
}
/// Copy from either a user address or a kernel address,
/// depending on `user_src`.
/// Returns 0 on success, -1 on error.
#[no_mangle]
pub unsafe extern "C" fn either_copyin(dst: *mut u8, user_src: i32, src: usize, len: usize) -> i32 {
let p = Process::current().unwrap();
if user_src > 0 {
copyin(p.pagetable, dst, src, len)
} else {
memmove(dst, src as *mut u8, len as u32);
0
}
}
/// Copy a null-terminated string from user to kernel.
///
/// Copy bytes to `dst` from virtual address `src_virtual_addr`
/// in a given pagetable, until b'\0' or `max` is reached.
/// Returns 0 on success, -1 on error.
pub unsafe fn copyinstr(
pagetable: Pagetable,
mut dst: *mut u8,
mut src_virtual_addr: usize,
mut max: usize,
) -> i32 {
let mut got_null = false;
while !got_null && max > 0 {
let va0 = round_down_page(src_virtual_addr);
let pa0 = walkaddr(pagetable, va0) as usize;
if pa0 == 0 {
return -1;
}
let mut n = PAGE_SIZE - (src_virtual_addr - va0);
if n > max {
n = max;
}
let mut p = (pa0 + src_virtual_addr - va0) as *const u8;
while n > 0 {
if *p == b'\0' {
*dst = b'\0';
got_null = true;
break;
} else {
*dst = *p;
}
n -= 1;
max -= 1;
p = p.add(1);
dst = dst.add(1);
}
src_virtual_addr = va0 + PAGE_SIZE;
}
if got_null {
0
} else {
-1
}
}
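`copyin`, `copyout`, and `copyinstr` above all clamp each copy to the portion of `len` that fits inside the page containing the current address. That per-page arithmetic is worth seeing in isolation; this is a host-runnable sketch and `chunk_len` is an illustrative name, not kernel code:

```rust
// Host-runnable sketch of the per-page clamping used by copyin/copyout.
const PAGE_SIZE: usize = 4096;

fn round_down_page(addr: usize) -> usize {
    addr & !(PAGE_SIZE - 1)
}

/// Number of bytes that can be copied before crossing the page
/// boundary after `va`, capped at `len`.
fn chunk_len(va: usize, len: usize) -> usize {
    let va0 = round_down_page(va);
    (PAGE_SIZE - (va - va0)).min(len)
}

fn main() {
    assert_eq!(chunk_len(0x1ff0, 100), 16); // only 16 bytes left in the page
    assert_eq!(chunk_len(0x2000, 100), 100); // request fits within one page
}
```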


@ -1,75 +0,0 @@
//! Architecture-agnostic trap handling.
#[cfg(target_arch = "riscv64")]
pub use super::riscv::trap::{trapinithart as inithart, usertrapret};
use super::interrupt;
use crate::proc::cpu::Cpu;
#[derive(Default)]
pub struct InterruptBlocker;
impl InterruptBlocker {
pub fn new() -> InterruptBlocker {
unsafe {
let interrupts_before = interrupt::interrupts_enabled();
let cpu = Cpu::current();
interrupt::disable_interrupts();
if cpu.interrupt_disable_layers == 0 {
cpu.previous_interrupts_enabled = interrupts_before;
}
cpu.interrupt_disable_layers += 1;
// crate::sync::spinlock::push_off();
}
InterruptBlocker
}
}
impl core::ops::Drop for InterruptBlocker {
fn drop(&mut self) {
unsafe {
let cpu = Cpu::current();
if interrupt::interrupts_enabled() == 1 || cpu.interrupt_disable_layers < 1 {
// panic!("pop_off mismatched");
return;
}
cpu.interrupt_disable_layers -= 1;
if cpu.interrupt_disable_layers == 0 && cpu.previous_interrupts_enabled == 1 {
interrupt::enable_interrupts();
}
// crate::sync::spinlock::pop_off();
}
}
}
impl !Send for InterruptBlocker {}
pub unsafe fn push_intr_off() {
let old = interrupt::interrupts_enabled();
let cpu = Cpu::current();
interrupt::disable_interrupts();
if cpu.interrupt_disable_layers == 0 {
cpu.previous_interrupts_enabled = old;
}
cpu.interrupt_disable_layers += 1;
}
pub unsafe fn pop_intr_off() {
let cpu = Cpu::current();
if interrupt::interrupts_enabled() == 1 {
// crate::panic_byte(b'0');
panic!("pop_intr_off - interruptible");
} else if cpu.interrupt_disable_layers < 1 {
// crate::panic_byte(b'1');
panic!("pop_intr_off");
}
cpu.interrupt_disable_layers -= 1;
if cpu.interrupt_disable_layers == 0 && cpu.previous_interrupts_enabled == 1 {
interrupt::enable_interrupts();
}
}
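The layered enable/disable logic above can be modeled without any hardware. This host-side sketch keeps only the counting, to show why nested `push_intr_off`/`pop_intr_off` pairs work: the pre-push state is recorded once, and only the outermost pop restores it. `CpuModel` is an illustration, not the kernel's `Cpu` type:

```rust
// Host-side model of the push/pop interrupt-disable layering above.
struct CpuModel {
    interrupts_enabled: bool,
    disable_layers: u32,
    previous_enabled: bool,
}

impl CpuModel {
    fn push_intr_off(&mut self) {
        let old = self.interrupts_enabled;
        self.interrupts_enabled = false;
        // Record the pre-push state only at the outermost layer.
        if self.disable_layers == 0 {
            self.previous_enabled = old;
        }
        self.disable_layers += 1;
    }

    fn pop_intr_off(&mut self) {
        assert!(!self.interrupts_enabled, "pop_intr_off - interruptible");
        assert!(self.disable_layers >= 1, "pop_intr_off");
        self.disable_layers -= 1;
        if self.disable_layers == 0 && self.previous_enabled {
            self.interrupts_enabled = true;
        }
    }
}

fn main() {
    let mut cpu = CpuModel {
        interrupts_enabled: true,
        disable_layers: 0,
        previous_enabled: false,
    };
    cpu.push_intr_off();
    cpu.push_intr_off(); // nested critical section
    cpu.pop_intr_off();
    assert!(!cpu.interrupts_enabled); // inner pop does not re-enable
    cpu.pop_intr_off();
    assert!(cpu.interrupts_enabled); // outermost pop restores the old state
}
```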


@ -1,8 +0,0 @@
//! Device drivers and hardware implementations.
pub mod ramdisk;
pub mod uart;
pub mod virtio_disk;
#[cfg(target_arch = "riscv64")]
pub mod riscv;


@ -1,8 +0,0 @@
//! Ramdisk that uses the disk image loaded by qemu -initrd fs.img
use crate::io::buf::Buffer;
extern "C" {
pub fn ramdiskinit();
pub fn ramdiskrw(buffer: *mut Buffer);
}


@ -1,55 +0,0 @@
use crate::{
hal::arch::riscv::{asm, MIE_MTIE, MSTATUS_MIE},
NCPU,
};
use core::ptr::addr_of;
// Core Local Interrupter (CLINT), which contains the timer.
// On qemu's virt machine the CLINT is mapped at this address.
pub const CLINT: usize = 0x2000000;
const CLINT_MTIME: usize = CLINT + 0xbff8;
extern "C" {
pub fn timervec();
}
#[no_mangle]
pub static mut timer_scratch: [[u64; 5]; NCPU] = [[0u64; 5]; NCPU];
fn clint_mtimecmp(hartid: usize) -> *mut u64 {
(CLINT + 0x4000 + (8 * hartid)) as *mut u64
}
/// Arrange to receive timer interrupts.
///
/// They will arrive in machine mode at
/// at timervec in kernelvec.S,
/// which turns them into software interrupts for
/// devintr() in trap.c.
pub unsafe fn timerinit() {
// Each CPU has a separate source of timer interrupts.
let id = asm::r_mhartid() as usize;
// Ask the CLINT for a timer interrupt.
// cycles, about 1/10th second in qemu
let interval = 1_000_000u64;
*clint_mtimecmp(id) = *(CLINT_MTIME as *const u64) + interval;
// Prepare information in scratch[] for timervec.
// scratch[0..=2]: Space for timervec to save registers.
// scratch[3]: Address of CLINT MTIMECMP register.
// scratch[4]: Desired interval (in cycles) between timer interrupts.
let scratch: &mut [u64; 5] = &mut timer_scratch[id];
scratch[3] = clint_mtimecmp(id) as usize as u64;
scratch[4] = interval;
asm::w_mscratch(addr_of!(scratch[0]) as usize as u64);
// Set the machine-mode trap handler.
asm::w_mtvec(timervec as usize as u64);
// Enable machine-mode interrupts.
asm::w_mstatus(asm::r_mstatus() | MSTATUS_MIE);
// Enable machine-mode timer interrupts.
asm::w_mie(asm::r_mie() | MIE_MTIE);
}
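The per-hart MTIMECMP address arithmetic used by `timerinit` is simple enough to check on the host. A sketch (the base address assumes qemu's virt machine layout, as above):

```rust
// Sketch of the CLINT MTIMECMP address arithmetic used by timerinit.
const CLINT: usize = 0x200_0000;

fn clint_mtimecmp(hartid: usize) -> usize {
    // Each hart gets its own 8-byte MTIMECMP register at CLINT + 0x4000.
    CLINT + 0x4000 + 8 * hartid
}

fn main() {
    assert_eq!(clint_mtimecmp(0), 0x200_4000);
    assert_eq!(clint_mtimecmp(2), 0x200_4010);
}
```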


@ -1,2 +0,0 @@
pub mod clint;
pub mod plic;


@ -1,77 +0,0 @@
//! The RISC-V Platform Level Interrupt Controller (PLIC)
use crate::hal::platform::VIRTIO0_IRQ;
use crate::proc::cpu::Cpu;
// (VIRTIO0_IRQ, VIRTIO0_IRQ_ADDR)
const VIRTIO0_IRQ_ADDR: usize = PLIC + VIRTIO0_IRQ * 4;
pub use crate::hal::platform::PLIC_BASE_ADDR as PLIC;
const PLIC_PRIORITY: usize = PLIC;
const PLIC_PENDING: usize = PLIC + 0x1000;
/// Get a pointer to the CPU-specific machine-mode enable register.
fn plic_menable(hartid: usize) -> *mut u32 {
(PLIC + 0x2000 + (0x100 * hartid)) as *mut u32
}
/// Get a pointer to the CPU-specific supervisor-mode enable register.
fn plic_senable(hartid: usize) -> *mut u32 {
(PLIC + 0x2080 + (0x100 * hartid)) as *mut u32
}
/// Get a pointer to the CPU-specific machine-mode priority register.
fn plic_mpriority(hartid: usize) -> *mut u32 {
(PLIC + 0x200000 + (0x2000 * hartid)) as *mut u32
}
/// Get a pointer to the CPU-specific supervisor-mode priority register.
fn plic_spriority(hartid: usize) -> *mut u32 {
(PLIC + 0x201000 + (0x2000 * hartid)) as *mut u32
}
/// Get a pointer to the CPU-specific machine-mode claim register.
fn plic_mclaim(hartid: usize) -> *mut u32 {
(PLIC + 0x200004 + (0x2000 * hartid)) as *mut u32
}
/// Get a pointer to the CPU-specific supervisor-mode claim register.
fn plic_sclaim(hartid: usize) -> *mut u32 {
(PLIC + 0x201004 + (0x2000 * hartid)) as *mut u32
}
pub unsafe fn plicinit() {
// Set desired IRQ priorities non-zero (otherwise disabled).
for (uart_irq, _) in &crate::hal::platform::UARTS {
*((PLIC + uart_irq * 4) as *mut u32) = 1;
}
for (virtio_disk_irq, _) in &crate::hal::platform::VIRTIO_DISKS {
*((PLIC + virtio_disk_irq * 4) as *mut u32) = 1;
}
}
pub unsafe fn plicinithart() {
let hart = Cpu::current_id();
// Set enable bits for this hart's S-mode
// for the UART and VIRTIO disk.
let mut enable_bits = 0;
for (uart_irq, _) in &crate::hal::platform::UARTS {
enable_bits |= 1 << uart_irq;
}
// for (virtio_disk_irq, _) in &crate::hal::platform::VIRTIO_DISKS {
// enable_bits |= 1 << virtio_disk_irq;
// }
enable_bits |= 1 << VIRTIO0_IRQ;
*plic_senable(hart) = enable_bits;
// Set this hart's S-mode priority threshold to 0.
*plic_spriority(hart) = 0;
}
/// Ask the PLIC what interrupt we should serve.
pub unsafe fn plic_claim() -> usize {
let hart = Cpu::current_id();
(*plic_sclaim(hart)) as usize
}
/// Tell the PLIC we've served this IRQ.
pub unsafe fn plic_complete(irq: usize) {
let hart = Cpu::current_id();
*plic_sclaim(hart) = irq as u32;
}
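The per-hart register strides above can be exercised on the host. A sketch of the supervisor-mode enable and claim address arithmetic (the base address assumes qemu's virt machine, and the offsets follow the PLIC memory map used in this file):

```rust
// Sketch of the per-hart PLIC register arithmetic above.
const PLIC: usize = 0x0c00_0000;

fn plic_senable(hartid: usize) -> usize {
    // S-mode enable registers are strided 0x100 apart per hart context.
    PLIC + 0x2080 + 0x100 * hartid
}

fn plic_sclaim(hartid: usize) -> usize {
    // S-mode claim/complete registers are strided 0x2000 apart.
    PLIC + 0x20_1004 + 0x2000 * hartid
}

fn main() {
    assert_eq!(plic_senable(1), 0x0c00_2180);
    assert_eq!(plic_sclaim(1), 0x0c20_3004);
}
```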


@ -1,252 +0,0 @@
//! Low-level driver routines for 16550a UART.
#![allow(non_upper_case_globals)]
use crate::{
console::consoleintr,
hal::arch::trap::InterruptBlocker,
proc::scheduler::wakeup,
queue::Queue,
sync::mutex::{Mutex, MutexGuard},
};
use core::ptr::addr_of;
// The UART control registers.
// Some have different meanings for read vs write.
// See http://byterunner.com/16550.html
/// Interrupt Enable Register
const IER_RX_ENABLE: u8 = 1 << 0;
const IER_TX_ENABLE: u8 = 1 << 1;
const FCR_FIFO_ENABLE: u8 = 1 << 0;
/// Clear the content of the two FIFOs.
const FCR_FIFO_CLEAR: u8 = 3 << 1;
const LCR_EIGHT_BITS: u8 = 3;
/// Special mode to set baud rate
const LCR_BAUD_LATCH: u8 = 1 << 7;
/// Input is waiting to be read from RHR
const LSR_RX_READY: u8 = 1 << 0;
/// THR can accept another character to send
const LSR_TX_IDLE: u8 = 1 << 5;
enum Register {
ReceiveHolding,
TransmitHolding,
InterruptEnable,
FIFOControl,
InterruptStatus,
LineControl,
LineStatus,
}
impl Register {
pub fn as_offset(&self) -> usize {
match self {
Register::ReceiveHolding => 0,
Register::TransmitHolding => 0,
Register::InterruptEnable => 1,
Register::FIFOControl => 2,
Register::InterruptStatus => 2,
            Register::LineControl => 3,
Register::LineStatus => 5,
}
}
pub fn as_ptr(&self, base_address: usize) -> *mut u8 {
(base_address + self.as_offset()) as *mut u8
}
pub fn read(&self, base_address: usize) -> u8 {
unsafe { self.as_ptr(base_address).read_volatile() }
}
pub fn write(&self, base_address: usize, value: u8) {
unsafe { self.as_ptr(base_address).write_volatile(value) }
}
}
pub struct Uart {
pub base_address: usize,
}
impl Uart {
pub const fn new(base_address: usize) -> Uart {
Uart { base_address }
}
/// Initialize the UART.
pub unsafe fn initialize(&self) {
// Disable interrupts.
Register::InterruptEnable.write(self.base_address, 0x00);
// Special mode to set baud rate.
Register::LineControl.write(self.base_address, LCR_BAUD_LATCH);
// LSB for baud rate of 38.4K.
*(self.base_address as *mut u8) = 0x03;
// MSB for baud rate of 38.4K.
*((self.base_address + 1) as *mut u8) = 0x00;
// Leave set-baud mode and set
// word length to 8 bits, no parity.
Register::LineControl.write(self.base_address, LCR_EIGHT_BITS);
// Reset and enable FIFOs.
Register::FIFOControl.write(self.base_address, FCR_FIFO_ENABLE | FCR_FIFO_CLEAR);
// Enable transmit and receive interrupts.
Register::InterruptEnable.write(self.base_address, IER_TX_ENABLE | IER_RX_ENABLE);
}
/// Handle an interrupt from the hardware.
pub fn interrupt(&self) {
// Read and process incoming data.
while let Some(b) = self.read_byte() {
consoleintr(b);
}
}
/// Read one byte from the UART.
pub fn read_byte(&self) -> Option<u8> {
if Register::LineStatus.read(self.base_address) & 0x01 != 0 {
// Input data is ready.
Some(Register::ReceiveHolding.read(self.base_address))
} else {
None
}
}
pub fn writer(&self) -> UartWriter<'_> {
UartWriter(self)
}
pub fn can_write_byte(&self) -> bool {
Register::LineStatus.read(self.base_address) & LSR_TX_IDLE != 0
}
/// Attempt to write one byte to the UART.
/// Returns a bool representing whether the byte was written.
pub fn write_byte(&self, byte: u8) -> bool {
        // Block interrupts to prevent TOCTOU manipulation. Bind the guard
        // to a name so it is not dropped until the function returns.
        let _blocker = InterruptBlocker::new();
if self.can_write_byte() {
Register::TransmitHolding.write(self.base_address, byte);
true
} else {
false
}
}
pub fn write_byte_blocking(&self, byte: u8) {
while !self.write_byte(byte) {
core::hint::spin_loop();
}
}
pub fn write_slice_blocking(&self, bytes: &[u8]) {
for b in bytes {
            self.write_byte_blocking(*b);
}
}
}
impl From<BufferedUart> for Uart {
fn from(value: BufferedUart) -> Self {
value.inner
}
}
#[derive(Copy, Clone)]
pub struct UartWriter<'u>(&'u Uart);
impl<'u> core::fmt::Write for UartWriter<'u> {
fn write_str(&mut self, s: &str) -> core::fmt::Result {
self.0.write_slice_blocking(s.as_bytes());
core::fmt::Result::Ok(())
}
}
pub struct BufferedUart {
inner: Uart,
buffer: Mutex<Queue<u8>>,
}
impl BufferedUart {
pub const fn new(base_address: usize) -> BufferedUart {
BufferedUart {
inner: Uart::new(base_address),
buffer: Mutex::new(Queue::new()),
}
}
pub fn interrupt(&self) {
        // Bind the guard so interrupts stay blocked for the whole handler.
        let _blocker = InterruptBlocker::new();
self.inner.interrupt();
// Send buffered characters.
let buf = self.buffer.lock_spinning();
self.send_buffered_bytes(buf);
}
pub fn writer(&self) -> BufferedUartWriter<'_> {
BufferedUartWriter(self)
}
pub fn writer_unbuffered(&self) -> UartWriter<'_> {
self.inner.writer()
}
/// Write a byte to the UART and buffer it.
/// Should not be used in interrupts.
pub fn write_byte_buffered(&self, byte: u8) {
let mut buf = self.buffer.lock_spinning();
// Sleep until there is space in the buffer.
while buf.space_remaining() == 0 {
unsafe {
buf.sleep(addr_of!(*self).cast_mut().cast());
}
}
// Add the byte onto the end of the queue.
buf.push_back(byte).expect("space in the uart queue");
        // Hand the guard to send_buffered_bytes(), which drains the queue.
self.send_buffered_bytes(buf);
}
/// Write a slice to the UART and buffer it.
/// Should not be used in interrupts.
pub fn write_slice_buffered(&self, bytes: &[u8]) {
for b in bytes {
self.write_byte_buffered(*b);
}
}
/// If the UART is idle and a character is
/// waiting in the transmit buffer, send it.
/// Returns how many bytes were sent.
fn send_buffered_bytes(&self, mut buf: MutexGuard<'_, Queue<u8>>) -> usize {
let mut i = 0;
loop {
if !self.inner.can_write_byte() {
// The UART transmit holding register is full,
// so we cannot give it another byte.
// It will interrupt when it's ready for a new byte.
break;
}
// Pop a byte from the front of the queue and send it.
match buf.pop_front() {
Some(b) => self.inner.write_byte(b),
                // The buffer is empty; we're finished sending bytes.
                None => break,
};
i += 1;
// Check if uartputc() is waiting for space in the buffer.
unsafe {
wakeup(addr_of!(*self).cast_mut().cast());
}
}
i
}
}
impl core::ops::Deref for BufferedUart {
type Target = Uart;
fn deref(&self) -> &Self::Target {
&self.inner
}
}
impl From<Uart> for BufferedUart {
fn from(value: Uart) -> Self {
BufferedUart {
inner: value,
buffer: Mutex::new(Queue::new()),
}
}
}
#[derive(Copy, Clone)]
pub struct BufferedUartWriter<'u>(&'u BufferedUart);
impl<'u> core::fmt::Write for BufferedUartWriter<'u> {
fn write_str(&mut self, s: &str) -> core::fmt::Result {
self.0.write_slice_buffered(s.as_bytes());
core::fmt::Result::Ok(())
}
}
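The `read_byte` and `can_write_byte` checks above both test single bits of the 16550 line-status register. A small host-runnable sketch of those bit tests (the sample LSR values are illustrative):

```rust
// Sketch of the 16550 line-status checks used by the UART driver above.
const LSR_RX_READY: u8 = 1 << 0; // input waiting to be read from RHR
const LSR_TX_IDLE: u8 = 1 << 5; // THR can accept another character

fn can_read(lsr: u8) -> bool {
    lsr & LSR_RX_READY != 0
}

fn can_write(lsr: u8) -> bool {
    lsr & LSR_TX_IDLE != 0
}

fn main() {
    assert!(can_read(0x61)); // RX ready and TX idle
    assert!(can_write(0x61));
    assert!(!can_write(0x01)); // data waiting, but THR is full
}
```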


@ -1,192 +0,0 @@
//! Virtio device driver.
//!
//! For both the MMIO interface, and virtio descriptors.
//! Only tested with qemu.
//!
//! The virtio spec: https://docs.oasis-open.org/virtio/virtio/v1.1/virtio-v1.1.pdf
//! qemu ... -drive file=fs.img,if=none,format=raw,id=x0 -device virtio-blk-device,drive=x0,bus=virtio-mmio-bus.0
use crate::{io::buf::Buffer, sync::spinlock::Spinlock};
use core::ffi::c_char;
// Virtio MMIO control registers, mapped starting at 0x10001000
// From qemu virtio_mmio.h
/// 0x74726976
pub const VIRTIO_MMIO_MAGIC_VALUE: u64 = 0x000u64;
/// Version - should be 2.
pub const VIRTIO_MMIO_VERSION: u64 = 0x004u64;
/// Device type.
///
/// 1: Network
/// 2: Disk
pub const VIRTIO_MMIO_DEVICE_ID: u64 = 0x008u64;
/// 0x554d4551
pub const VIRTIO_MMIO_VENDOR_ID: u64 = 0x00cu64;
pub const VIRTIO_MMIO_DEVICE_FEATURES: u64 = 0x010u64;
pub const VIRTIO_MMIO_DRIVER_FEATURES: u64 = 0x020u64;
/// Select queue, write-only.
pub const VIRTIO_MMIO_QUEUE_SEL: u64 = 0x030u64;
/// Max size of current queue, read-only.
pub const VIRTIO_MMIO_QUEUE_NUM_MAX: u64 = 0x034u64;
/// Size of current queue, write-only.
pub const VIRTIO_MMIO_QUEUE_NUM: u64 = 0x038u64;
/// Ready bit.
pub const VIRTIO_MMIO_QUEUE_READY: u64 = 0x044u64;
/// Write-only.
pub const VIRTIO_MMIO_QUEUE_NOTIFY: u64 = 0x050u64;
/// Read-only.
pub const VIRTIO_MMIO_INTERRUPT_STATUS: u64 = 0x060u64;
/// Write-only.
pub const VIRTIO_MMIO_INTERRUPT_ACK: u64 = 0x064u64;
/// Read/write.
pub const VIRTIO_MMIO_STATUS: u64 = 0x070u64;
/// Physical address for descriptor table, write-only.
pub const VIRTIO_MMIO_QUEUE_DESC_LOW: u64 = 0x080u64;
pub const VIRTIO_MMIO_QUEUE_DESC_HIGH: u64 = 0x084u64;
/// Physical address for available ring, write-only.
pub const VIRTIO_MMIO_DRIVER_DESC_LOW: u64 = 0x090u64;
pub const VIRTIO_MMIO_DRIVER_DESC_HIGH: u64 = 0x094u64;
/// Physical address for used ring, write-only.
pub const VIRTIO_MMIO_DEVICE_DESC_LOW: u64 = 0x0a0u64;
pub const VIRTIO_MMIO_DEVICE_DESC_HIGH: u64 = 0x0a4u64;
// Status register bits, from qemu virtio_config.h.
pub const VIRTIO_CONFIG_S_ACKNOWLEDGE: u8 = 0x01u8;
pub const VIRTIO_CONFIG_S_DRIVER: u8 = 0x02u8;
pub const VIRTIO_CONFIG_S_DRIVER_OK: u8 = 0x04u8;
pub const VIRTIO_CONFIG_S_FEATURES_OK: u8 = 0x08u8;
// Device feature bits
/// Disk is read-only.
pub const VIRTIO_BLK_F_RO: u8 = 5u8;
/// Supports SCSI command passthrough.
pub const VIRTIO_BLK_F_SCSI: u8 = 7u8;
/// Writeback mode available in config.
pub const VIRTIO_BLK_F_CONFIG_WCE: u8 = 11u8;
/// Support more than one vq.
pub const VIRTIO_BLK_F_MQ: u8 = 12u8;
pub const VIRTIO_F_ANY_LAYOUT: u8 = 27u8;
pub const VIRTIO_RING_F_INDIRECT_DESC: u8 = 28u8;
pub const VIRTIO_RING_F_EVENT_IDX: u8 = 29u8;
/// This many virtio descriptors.
///
/// Must be a power of two.
pub const NUM_DESCRIPTORS: usize = 8usize;
/// A single descriptor, from the spec.
#[repr(C)]
pub struct VirtqDescriptor {
pub addr: u64,
pub len: u32,
pub flags: u16,
pub next: u16,
}
/// Chained with another descriptor.
pub const VRING_DESC_F_NEXT: u16 = 1u16;
/// Device writes (vs read).
pub const VRING_DESC_F_WRITE: u16 = 2u16;
/// The entire avail ring, from the spec.
#[repr(C)]
pub struct VirtqAvailable {
/// Always zero.
pub flags: u16,
/// Driver will write ring[idx] next.
pub idx: u16,
/// Descriptor numbers of chain heads.
pub ring: [u16; NUM_DESCRIPTORS],
pub unused: u16,
}
/// One entry in the "used" ring, with which the
/// device tells the driver about completed requests.
#[repr(C)]
pub struct VirtqUsedElement {
/// Index of start of completed descriptor chain.
pub id: u32,
pub len: u32,
}
#[repr(C)]
pub struct VirtqUsed {
/// Always zero.
pub flags: u16,
/// Device increments it when it adds a ring[] entry.
pub idx: u16,
pub ring: [VirtqUsedElement; NUM_DESCRIPTORS],
}
// These are specific to virtio block devices (disks),
// Described in section 5.2 of the spec.
/// Read the disk.
pub const VIRTIO_BLK_T_IN: u32 = 0u32;
/// Write the disk.
pub const VIRTIO_BLK_T_OUT: u32 = 1u32;
/// The format of the first descriptor in a disk request.
///
/// To be followed by two more descriptors containing
/// the block, and a one-byte status.
#[repr(C)]
pub struct VirtioBlockRequest {
/// 0: Write the disk.
/// 1: Read the disk.
pub kind: u32,
pub reserved: u32,
pub sector: u64,
}
#[repr(C)]
pub struct DiskInfo {
pub b: *mut Buffer,
pub status: c_char,
}
#[repr(C)]
pub struct Disk {
/// A set (not a ring) of DMA descriptors, with which the
/// driver tells the device where to read and write individual
/// disk operations. There are NUM descriptors.
///
/// Most commands consist of a "chain" (linked list)
/// of a couple of these descriptors.
pub descriptors: *mut VirtqDescriptor,
/// A ring in which the driver writes descriptor numbers
/// that the driver would like the device to process. It
/// only includes the head descriptor of each chain. The
/// ring has NUM elements.
pub available: *mut VirtqAvailable,
/// A ring in which the device writes descriptor numbers
/// that the device has finished processing (just the
/// head of each chain). There are NUM used ring entries.
pub used: *mut VirtqUsed,
// Our own book-keeping.
/// Is a descriptor free?
pub free: [c_char; NUM_DESCRIPTORS],
/// We've looked this far in used[2..NUM].
pub used_idx: u16,
/// Track info about in-flight operations,
/// for use when completion interrupt arrives.
///
/// Indexed by first descriptor index of chain.
pub info: [DiskInfo; NUM_DESCRIPTORS],
/// Disk command headers.
/// One-for-one with descriptors, for convenience.
pub ops: [VirtioBlockRequest; NUM_DESCRIPTORS],
pub vdisk_lock: Spinlock,
}
extern "C" {
pub static mut disk: Disk;
pub fn virtio_disk_init();
pub fn virtio_disk_rw(buf: *mut Buffer, write: i32);
pub fn virtio_disk_intr();
}
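The `Disk` comments above describe disk commands as chains of descriptors: a request header, the data block, and a one-byte status. A host-runnable sketch of what such a three-descriptor read chain looks like (the addresses and lengths are made up for illustration):

```rust
// Sketch of a three-descriptor virtio-blk request chain as described by
// the Disk comments above.
const VRING_DESC_F_NEXT: u16 = 1; // chained with another descriptor
const VRING_DESC_F_WRITE: u16 = 2; // device writes (vs reads)

struct Desc {
    addr: u64,
    len: u32,
    flags: u16,
    next: u16,
}

fn main() {
    // Read request: header the device reads, data buffer the device
    // writes into, then a one-byte status the device also writes.
    let chain = [
        Desc { addr: 0x1000, len: 16, flags: VRING_DESC_F_NEXT, next: 1 },
        Desc { addr: 0x2000, len: 1024, flags: VRING_DESC_F_NEXT | VRING_DESC_F_WRITE, next: 2 },
        Desc { addr: 0x3000, len: 1, flags: VRING_DESC_F_WRITE, next: 0 },
    ];
    let chained = chain.iter().filter(|d| d.flags & VRING_DESC_F_NEXT != 0).count();
    assert_eq!(chained, 2); // the last descriptor ends the chain
}
```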


@ -1,3 +0,0 @@
pub mod arch;
pub mod hardware;
pub mod platform;


@ -1 +0,0 @@
// TODO


@ -1,11 +0,0 @@
#[cfg(feature = "milk-v")]
mod milk_v;
#[cfg(feature = "milk-v")]
pub use milk_v::*;
#[cfg(feature = "qemu-riscv64")]
mod qemu_riscv64;
#[cfg(feature = "qemu-riscv64")]
pub use qemu_riscv64::*;
#[cfg(not(any(feature = "milk-v", feature = "qemu-riscv64")))]
compile_error!("a platform must be selected");


@ -1,23 +0,0 @@
use crate::hal::hardware::uart::BufferedUart;
pub static DIRECT_MAPPED_PAGES: [usize; 1] = [QEMU_POWER];
// Devices: (IRQ, driver)
pub static UARTS: [(usize, BufferedUart); 1] = [(10, BufferedUart::new(0x1000_0000))];
pub static VIRTIO_DISKS: [(usize, usize); 1] = [(1, 0x10001000)];
// Virtio MMIO interface
pub const VIRTIO0: usize = 0x10001000;
pub const VIRTIO0_IRQ: usize = 1;
// Platform Interrupt Controller location
pub const PLIC_BASE_ADDR: usize = 0x0c000000;
/// QEMU test interface. Used for power off and on.
const QEMU_POWER: usize = 0x100000;
pub unsafe fn shutdown() -> ! {
let qemu_power = QEMU_POWER as *mut u32;
qemu_power.write_volatile(0x5555u32);
unreachable!();
}


@ -1,127 +0,0 @@
//! Buffer cache.
//!
//! The buffer cache is a linked list of buf structures holding
//! cached copies of disk block contents. Caching disk blocks
//! in memory reduces the number of disk reads and also provides
//! a synchronization point for disk blocks used by multiple processes.
//!
//! Interface:
//! - To get a buffer for a particular disk block, call bread.
//! - After changing buffer data, call bwrite to write it to disk.
//! - When done with the buffer, call brelse.
//! - Do not use the buffer after calling brelse.
//! - Only one process at a time can use a buffer,
//! so do not keep them longer than necessary.
use crate::{io::buf::Buffer, sync::spinlock::Spinlock, NBUF};
pub struct BufferCache {
pub buffers: [Buffer; NBUF],
}
impl BufferCache {
    /// Look through the buffer cache for block on device dev.
    ///
    /// If found, bump its reference count and return it.
    /// Allocation of a new buffer is not yet implemented.
    fn get(&mut self, dev: u32, blockno: u32) -> Option<&mut Buffer> {
        for buf in &mut self.buffers {
            if buf.dev == dev && buf.blockno == blockno {
                buf.refcnt += 1;
                return Some(buf);
            }
        }
        // TODO: if not found, allocate a buffer and return it locked.
        None
    }
}
#[repr(C)]
pub struct BCache {
pub lock: Spinlock,
pub buf: [Buffer; NBUF],
pub head: Buffer,
}
extern "C" {
pub static mut bcache: BCache;
pub fn binit();
// pub fn bget(dev: u32, blockno: u32) -> *mut Buffer;
pub fn bread(dev: u32, blockno: u32) -> *mut Buffer;
pub fn bwrite(b: *mut Buffer);
pub fn brelse(b: *mut Buffer);
pub fn bpin(b: *mut Buffer);
pub fn bunpin(b: *mut Buffer);
}
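The `bread`/`brelse` interface above rests on reference counting keyed by `(dev, blockno)`. A safe, host-side sketch of the lookup-or-recycle rule (the `Buf` struct and this `bget` are simplified stand-ins, with no locks or LRU ordering):

```rust
#[derive(Clone, Copy, PartialEq)]
struct Buf {
    dev: u32,
    blockno: u32,
    refcnt: u32,
}

/// Find a cached buffer for (dev, blockno) and bump its refcount,
/// or recycle an unreferenced slot, as bget does.
fn bget(cache: &mut [Buf], dev: u32, blockno: u32) -> Option<usize> {
    // Already cached?
    if let Some(i) = cache
        .iter()
        .position(|b| b.dev == dev && b.blockno == blockno)
    {
        cache[i].refcnt += 1;
        return Some(i);
    }
    // Not cached: recycle a slot no one holds a reference to.
    if let Some(i) = cache.iter().position(|b| b.refcnt == 0) {
        cache[i] = Buf { dev, blockno, refcnt: 1 };
        return Some(i);
    }
    None // the kernel panics here: "bget: no buffers"
}

fn main() {
    let mut cache = [Buf { dev: 0, blockno: 0, refcnt: 0 }; 2];
    let a = bget(&mut cache, 1, 10).unwrap();
    assert_eq!(bget(&mut cache, 1, 10), Some(a)); // cache hit
    assert_eq!(cache[a].refcnt, 2);
    bget(&mut cache, 1, 11).unwrap();
    assert!(bget(&mut cache, 1, 12).is_none()); // every slot pinned
}
```

A slot can only be recycled once its refcount drops back to zero, which is why callers must pair every `bread` with a `brelse`.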
// pub static BUFFER_CACHE: Mutex<BufferCache> = Mutex::new();
// #[no_mangle]
// pub unsafe extern "C" fn bget(dev: u32, blockno: u32) -> *mut Buffer {
// let mut b: *mut Buffer;
// let _guard = bcache.lock.lock();
//
// // Is the block already cached?
// b = bcache.head.next;
// while b != addr_of_mut!(bcache.head) {
// if (*b).dev == dev && (*b).blockno == blockno {
// (*b).refcnt += 1;
// acquiresleep(addr_of_mut!((*b).lock));
// // (*b).lock.lock_unguarded();
// return b;
// } else {
// b = (*b).next;
// }
// }
//
// // Not cached.
// // Recycle the least recently used unused buffer.
//     b = bcache.head.prev;
//     while b != addr_of_mut!(bcache.head) {
//         if (*b).refcnt == 0 {
//             (*b).dev = dev;
//             (*b).blockno = blockno;
//             (*b).valid = 0;
//             (*b).refcnt = 1;
//             // (*b).lock.lock_unguarded();
//             acquiresleep(addr_of_mut!((*b).lock));
//             return b;
//         }
//         b = (*b).prev;
//     }
//
// panic!("bget: no buffers");
// }
// /// Return a locked buffer with the contents of the indicated block.
// #[no_mangle]
// pub unsafe extern "C" fn bread(dev: u32, blockno: u32) -> *mut Buffer {
// let b = bget(dev, blockno);
//
// if (*b).valid == 0 {
// virtio_disk_rw(b, 0);
// (*b).valid = 1;
// }
//
// b
// }
//
// #[no_mangle]
// pub unsafe extern "C" fn bwrite(b: *mut Buffer) {
// if holdingsleep(addr_of_mut!((*b).lock)) == 0 {
// // if !(*b).lock.held_by_current_proc() {
// panic!("bwrite");
// }
//
// virtio_disk_rw(b, 1);
// }
// #[no_mangle]
// pub unsafe extern "C" fn bpin(b: *mut Buffer) {
// let _guard = bcache.lock.lock();
// (*b).refcnt += 1;
// // bcache.lock.unlock();
// }
//
// #[no_mangle]
// pub unsafe extern "C" fn bunpin(b: *mut Buffer) {
// let _guard = bcache.lock.lock();
// (*b).refcnt -= 1;
// // bcache.lock.unlock();
// }
//


@ -1,16 +0,0 @@
use crate::{fs::BSIZE, sync::sleeplock::Sleeplock};
#[repr(C)]
pub struct Buffer {
/// Has data been read from disk?
pub valid: i32,
/// Does disk "own" buf?
pub disk: i32,
pub dev: u32,
pub blockno: u32,
pub lock: Sleeplock,
pub refcnt: u32,
pub prev: *mut Buffer,
pub next: *mut Buffer,
pub data: [u8; BSIZE as usize],
}


@ -1,3 +0,0 @@
pub mod bio;
pub mod buf;
pub mod pipe;


@ -1,165 +0,0 @@
use crate::{
fs::file::{filealloc, fileclose, File, FileType},
hal::arch::virtual_memory::{copyin, copyout},
mem::kalloc::{kalloc, kfree},
proc::{process::Process, scheduler::wakeup},
sync::spinlock::Spinlock,
};
use core::ptr::{addr_of, addr_of_mut};
pub const PIPESIZE: usize = 512;
#[derive(Copy, Clone, Debug, PartialEq)]
pub enum PipeError {
Allocation,
ProcessKilled,
}
pub type Result<T> = core::result::Result<T, PipeError>;
#[repr(C)]
pub struct Pipe {
pub lock: Spinlock,
pub data: [u8; PIPESIZE],
/// Number of bytes read.
pub bytes_read: u32,
/// Number of bytes written.
pub bytes_written: u32,
/// Read fd is still open.
pub is_read_open: i32,
/// Write fd is still open.
pub is_write_open: i32,
}
impl Pipe {
#[allow(clippy::new_ret_no_self)]
pub unsafe fn new(a: *mut *mut File, b: *mut *mut File) -> Result<()> {
*a = filealloc();
*b = filealloc();
let pipe = kalloc() as *mut Pipe;
// If any of them fail, close all and return an error.
        if (*a).is_null() || (*b).is_null() || pipe.is_null() {
            if !pipe.is_null() {
                kfree(pipe as *mut u8);
            }
            if !(*a).is_null() {
                fileclose(*a);
            }
            if !(*b).is_null() {
                fileclose(*b);
            }
Err(PipeError::Allocation)
} else {
*pipe = Pipe::default();
(**a).kind = FileType::Pipe;
(**a).readable = 1;
(**a).writable = 0;
(**a).pipe = pipe;
(**b).kind = FileType::Pipe;
(**b).readable = 0;
(**b).writable = 1;
(**b).pipe = pipe;
Ok(())
}
}
/// Unsafely get a reference to `self`.
///
/// `self.lock` must be held beforehand.
#[allow(clippy::mut_from_ref)]
unsafe fn as_mut(&self) -> &mut Self {
&mut *addr_of!(*self).cast_mut()
}
pub unsafe fn close(&self, writable: i32) {
let _guard = self.lock.lock();
if writable > 0 {
self.as_mut().is_write_open = 0;
wakeup(addr_of!(self.bytes_read).cast_mut().cast());
} else {
self.as_mut().is_read_open = 0;
wakeup(addr_of!(self.bytes_written).cast_mut().cast());
}
if self.is_read_open == 0 && self.is_write_open == 0 {
kfree(addr_of!(*self).cast_mut().cast());
}
}
pub unsafe fn write(&self, addr: u64, num_bytes: usize) -> Result<usize> {
let mut i = 0;
let proc = Process::current().unwrap();
let guard = self.lock.lock();
while i < num_bytes {
if self.is_read_open == 0 || proc.is_killed() {
return Err(PipeError::ProcessKilled);
}
if self.bytes_written == self.bytes_read + PIPESIZE as u32 {
// DOC: pipewrite-full
wakeup(addr_of!(self.bytes_read).cast_mut().cast());
guard.sleep(addr_of!(self.bytes_written).cast_mut().cast());
} else {
let mut b = 0u8;
if copyin(proc.pagetable, addr_of_mut!(b), addr as usize + i, 1) == -1 {
break;
}
let index = self.bytes_written as usize % PIPESIZE;
self.as_mut().data[index] = b;
self.as_mut().bytes_written += 1;
i += 1;
}
}
wakeup(addr_of!(self.bytes_read).cast_mut().cast());
Ok(i)
}
#[allow(clippy::while_immutable_condition)]
pub unsafe fn read(&self, addr: u64, num_bytes: usize) -> Result<usize> {
let mut i = 0;
let proc = Process::current().unwrap();
let guard = self.lock.lock();
// DOC: pipe-empty
while self.bytes_read == self.bytes_written && self.is_write_open > 0 {
if proc.is_killed() {
return Err(PipeError::ProcessKilled);
} else {
// DOC: piperead-sleep
guard.sleep(addr_of!(self.bytes_read).cast_mut().cast());
}
}
// DOC: piperead-copy
while i < num_bytes {
if self.bytes_read == self.bytes_written {
break;
}
let b = self.data[self.bytes_read as usize % PIPESIZE];
self.as_mut().bytes_read += 1;
if copyout(proc.pagetable, addr as usize + i, addr_of!(b).cast_mut(), 1) == -1 {
break;
}
i += 1;
}
wakeup(addr_of!(self.bytes_written).cast_mut().cast());
Ok(i)
}
}
impl Default for Pipe {
fn default() -> Pipe {
Pipe {
lock: Spinlock::new(),
data: [0u8; PIPESIZE],
bytes_read: 0,
bytes_written: 0,
is_read_open: 1,
is_write_open: 1,
}
}
}
#[no_mangle]
pub unsafe extern "C" fn pipealloc(a: *mut *mut File, b: *mut *mut File) -> i32 {
match Pipe::new(a, b) {
Ok(_) => 0,
Err(_) => -1,
}
}
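The pipe tracks monotonically increasing `bytes_read`/`bytes_written` counters and indexes `data` modulo `PIPESIZE`; the buffer is full exactly when the write counter is one whole `PIPESIZE` ahead of the read counter. A safe, single-threaded sketch of that bookkeeping (no locking, sleeping, or copyin/copyout; the small `PIPESIZE` is for illustration only):

```rust
const PIPESIZE: usize = 8; // small for illustration; the kernel uses 512

struct Ring {
    data: [u8; PIPESIZE],
    nread: u32,  // bytes_read
    nwrite: u32, // bytes_written
}

impl Ring {
    fn new() -> Self {
        Ring { data: [0; PIPESIZE], nread: 0, nwrite: 0 }
    }

    fn write(&mut self, b: u8) -> bool {
        // Full when the write counter is a whole buffer ahead;
        // this is where pipewrite sleeps on bytes_written.
        if self.nwrite == self.nread + PIPESIZE as u32 {
            return false;
        }
        self.data[self.nwrite as usize % PIPESIZE] = b;
        self.nwrite += 1;
        true
    }

    fn read(&mut self) -> Option<u8> {
        // Empty when the counters coincide.
        if self.nread == self.nwrite {
            return None;
        }
        let b = self.data[self.nread as usize % PIPESIZE];
        self.nread += 1;
        Some(b)
    }
}

fn main() {
    let mut r = Ring::new();
    for i in 0..PIPESIZE as u8 {
        assert!(r.write(i));
    }
    assert!(!r.write(99)); // full
    assert_eq!(r.read(), Some(0)); // FIFO order
    assert!(r.write(99)); // one slot freed
}
```

Keeping the counters monotonic (rather than wrapping indices) makes the full and empty conditions a single comparison, at the cost of a modulo on every access.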


@ -1,139 +0,0 @@
#![no_main]
#![no_std]
#![allow(dead_code)]
#![allow(clippy::missing_safety_doc)]
#![feature(negative_impls)]
#![feature(str_from_raw_parts)]
extern crate alloc;
extern crate core;
mod hal;
mod console;
mod fs;
mod io;
mod mem;
mod proc;
mod queue;
mod string;
mod sync;
mod syscall;
use crate::{proc::cpu::Cpu, sync::mutex::Mutex};
use core::{
ffi::c_char,
ptr::addr_of,
};
pub(crate) use crate::console::printf::{print, println, uprint, uprintln};
pub static mut STARTED: bool = false;
pub static PANICKED: Mutex<bool> = Mutex::new(false);
/// Maximum number of processes
pub const NPROC: usize = 64;
/// Maximum number of CPUs
pub const NCPU: usize = 8;
/// Maximum number of open files per process
pub const NOFILE: usize = 16;
/// Maximum number of open files per system
pub const NFILE: usize = 100;
/// Maximum number of active inodes
pub const NINODE: usize = 50;
/// Maximum major device number
pub const NDEV: usize = 10;
/// Device number of file system root disk
pub const ROOTDEV: usize = 1;
/// Max exec arguments
pub const MAXARG: usize = 32;
/// Max num of blocks any FS op writes
pub const MAXOPBLOCKS: usize = 10;
/// Max data blocks in on-disk log
pub const LOGSIZE: usize = MAXOPBLOCKS * 3;
/// Size of disk block cache
pub const NBUF: usize = MAXOPBLOCKS * 3;
/// Size of file system in blocks
pub const FSSIZE: usize = 2000;
/// Maximum file path size
pub const MAXPATH: usize = 128;
pub unsafe fn main() -> ! {
if Cpu::current_id() == 0 {
console::consoleinit();
mem::kalloc::kinit();
println!("\nxv6 kernel is booting");
hal::arch::virtual_memory::init();
hal::arch::virtual_memory::inithart();
proc::process::procinit();
hal::arch::trap::inithart();
hal::arch::interrupt::init();
hal::arch::interrupt::inithart();
io::bio::binit();
fs::inode::iinit();
hal::hardware::virtio_disk::virtio_disk_init();
proc::process::userinit();
STARTED = true;
} else {
while !STARTED {
core::hint::spin_loop();
}
hal::arch::virtual_memory::inithart();
hal::arch::trap::inithart();
hal::arch::interrupt::inithart();
}
proc::scheduler::scheduler();
}
#[panic_handler]
fn panic_wrapper(panic_info: &core::panic::PanicInfo) -> ! {
if let Some(location) = panic_info.location() {
uprint!("kernel panic ({}): ", location.file());
} else {
uprint!("kernel panic: ");
}
uprintln!("{}", panic_info.message().as_str().unwrap_or("could not recover error message"));
// if let Some(s) = panic_info.message() {
// uprintln!("{}", s);
// } else if let Some(s) = panic_info.payload().downcast_ref::<&str>() {
// uprintln!("{}", s);
// } else if let Some(s) = panic_info.payload().downcast_ref::<&CStr>() {
// uprintln!("{:?}", s);
// } else {
// uprintln!("could not recover error message");
// }
uprintln!("███████╗██╗ ██╗ ██████╗██╗ ██╗██╗██╗");
uprintln!("██╔════╝██║ ██║██╔════╝██║ ██╔╝██║██║");
uprintln!("█████╗ ██║ ██║██║ █████╔╝ ██║██║");
uprintln!("██╔══╝ ██║ ██║██║ ██╔═██╗ ╚═╝╚═╝");
uprintln!("██║ ╚██████╔╝╚██████╗██║ ██╗██╗██╗");
uprintln!("╚═╝ ╚═════╝ ╚═════╝╚═╝ ╚═╝╚═╝╚═╝");
unsafe {
*crate::PANICKED.lock_spinning() = true;
// Quit QEMU for convenience.
crate::syscall::Syscall::Shutdown.call();
}
loop {
core::hint::spin_loop();
}
}
#[no_mangle]
pub unsafe extern "C" fn panic(msg: *const c_char) -> ! {
    let mut message = [b' '; 32];
    let mut i = 0;
    // Copy up to the NUL terminator, without overrunning the buffer.
    while i < message.len() {
        match *msg.add(i) {
            0 => break,
            c => message[i] = c as u8,
        }
        i += 1;
    }
    let message = core::str::from_raw_parts(addr_of!(message[0]), i);
panic!("panic from c: {}", message);
}


@ -1,113 +0,0 @@
//! Physical memory allocator, for user processes,
//! kernel stacks, page-table pages,
//! and pipe buffers. Allocates whole 4096-byte pages.
use crate::{
hal::arch::mem::{round_up_page, PAGE_SIZE, PHYSICAL_END},
mem::memset,
sync::spinlock::Spinlock,
};
use core::ptr::{addr_of_mut, null_mut};
extern "C" {
    // First address after the kernel. Defined by kernel.ld;
    // the zero-length array type makes `end` usable as an address.
pub static mut end: [u8; 0];
}
#[no_mangle]
pub static mut kmem: KernelMemory = KernelMemory {
lock: Spinlock::new(),
freelist: null_mut(),
};
#[repr(C)]
pub struct Run {
next: *mut Run,
}
#[repr(C)]
pub struct KernelMemory {
pub lock: Spinlock,
pub freelist: *mut Run,
}
pub unsafe fn kinit() {
kmem.lock = Spinlock::new();
freerange(addr_of_mut!(end).cast(), PHYSICAL_END as *mut u8)
}
unsafe fn freerange(pa_start: *mut u8, pa_end: *mut u8) {
let mut p = round_up_page(pa_start as usize) as *mut u8;
while p.add(PAGE_SIZE) <= pa_end {
kfree(p.cast());
p = p.add(PAGE_SIZE);
}
}
/// Free the page of physical memory pointed at by pa,
/// which normally should have been returned by a call
/// to kalloc(). The exception is when initializing the
/// allocator - see kinit above.
#[no_mangle]
pub unsafe extern "C" fn kfree(pa: *mut u8) {
if (pa as usize % PAGE_SIZE) != 0
|| pa <= addr_of_mut!(end) as *mut u8
|| pa >= PHYSICAL_END as *mut u8
{
panic!("kfree");
}
memset(pa, 0, PAGE_SIZE);
let run: *mut Run = pa.cast();
let _guard = kmem.lock.lock();
(*run).next = kmem.freelist;
kmem.freelist = run;
}
/// Allocate one 4096-byte page of physical memory.
///
/// Returns a pointer that the kernel can use.
/// Returns 0 if the memory cannot be allocated.
#[no_mangle]
pub unsafe extern "C" fn kalloc() -> *mut u8 {
let _guard = kmem.lock.lock();
let run = kmem.freelist;
    if !run.is_null() {
        kmem.freelist = (*run).next;
        // Zero the page before handing it out.
        memset(run.cast(), 0, PAGE_SIZE);
    }
run as *mut u8
}
use core::alloc::{GlobalAlloc, Layout};
struct KernelAllocator;
unsafe impl GlobalAlloc for KernelAllocator {
unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
if layout.size() > 4096 {
panic!("can only allocate one page of memory at a time");
}
let ptr = kalloc();
if ptr.is_null() {
panic!("kernel could not allocate memory");
}
ptr
}
unsafe fn dealloc(&self, ptr: *mut u8, _layout: Layout) {
kfree(ptr);
}
}
#[global_allocator]
static GLOBAL: KernelAllocator = KernelAllocator;
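`kalloc`/`kfree` above maintain an intrusive free list: each free page's first word points at the next free page, and allocation pops the head. A safe sketch of the same push/pop discipline, with array indices standing in for page addresses (the `FreeList` type is invented for illustration):

```rust
const NPAGES: usize = 4;

struct FreeList {
    // next[i] is the page after page i on the free list
    // (in the kernel this word lives inside the free page itself).
    next: [Option<usize>; NPAGES],
    head: Option<usize>,
}

impl FreeList {
    fn new() -> Self {
        let mut fl = FreeList { next: [None; NPAGES], head: None };
        // freerange: push every page onto the list.
        for i in 0..NPAGES {
            fl.kfree(i);
        }
        fl
    }

    /// Push a page onto the front of the list (like kfree).
    fn kfree(&mut self, page: usize) {
        self.next[page] = self.head;
        self.head = Some(page);
    }

    /// Pop the front page, if any (like kalloc).
    fn kalloc(&mut self) -> Option<usize> {
        let page = self.head?;
        self.head = self.next[page];
        Some(page)
    }
}

fn main() {
    let mut fl = FreeList::new();
    // LIFO: the last page freed is the first allocated.
    assert_eq!(fl.kalloc(), Some(NPAGES - 1));
    let mut count = 1;
    while fl.kalloc().is_some() {
        count += 1;
    }
    assert_eq!(count, NPAGES); // every page was reachable
    assert_eq!(fl.kalloc(), None); // out of memory
}
```

The intrusive design costs no extra storage: free pages are unused by definition, so their own bytes hold the list links.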


@ -1,55 +0,0 @@
pub mod kalloc;
#[no_mangle]
pub unsafe extern "C" fn memset(dst: *mut u8, data: u8, max_bytes: usize) -> *mut u8 {
for i in 0..max_bytes {
*dst.add(i) = data;
}
dst
}
#[no_mangle]
pub unsafe extern "C" fn memcmp(mut a: *const u8, mut b: *const u8, max_bytes: u32) -> i32 {
for _ in 0..max_bytes {
if *a != *b {
            return *a as i32 - *b as i32;
} else {
a = a.add(1);
b = b.add(1);
}
}
0
}
#[no_mangle]
pub unsafe extern "C" fn memmove(mut dst: *mut u8, mut src: *const u8, max_bytes: u32) -> *mut u8 {
if max_bytes == 0 {
return dst;
}
    // If src starts before dst and src + max_bytes
    // is after dst, the memory regions overlap.
    if src < dst as *const u8 && src.add(max_bytes as usize) > dst as *const u8 {
dst = dst.add(max_bytes as usize);
src = src.add(max_bytes as usize);
for _ in 0..max_bytes {
dst = dst.sub(1);
src = src.sub(1);
*dst = *src;
}
} else {
for _ in 0..max_bytes {
*dst = *src;
dst = dst.add(1);
src = src.add(1);
}
}
dst
}
#[no_mangle]
pub unsafe extern "C" fn memcpy(dst: *mut u8, src: *const u8, max_bytes: u32) -> *mut u8 {
memmove(dst, src, max_bytes)
}
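The backward-copy branch matters only when the regions overlap with `src` below `dst`: copying backwards ensures every source byte is read before it is overwritten. A safe slice-based sketch of both directions (`move_within` is illustrative, not part of the kernel):

```rust
/// Overlap-safe copy within one buffer, mirroring memmove's two branches.
fn move_within(buf: &mut [u8], src: usize, dst: usize, n: usize) {
    if src < dst {
        // src below dst: copy backwards so un-copied source bytes
        // are not clobbered by earlier writes.
        for i in (0..n).rev() {
            buf[dst + i] = buf[src + i];
        }
    } else {
        // src at or above dst: a plain forward copy is safe.
        for i in 0..n {
            buf[dst + i] = buf[src + i];
        }
    }
}

fn main() {
    let mut buf = [1, 2, 3, 4, 0];
    move_within(&mut buf, 0, 1, 4); // overlapping, src < dst
    assert_eq!(buf, [1, 1, 2, 3, 4]);

    let mut buf2 = [0, 1, 2, 3, 4];
    move_within(&mut buf2, 1, 0, 4); // overlapping, src > dst
    assert_eq!(buf2, [1, 2, 3, 4, 4]);
}
```

A forward copy in the first case would smear `buf[src]` across the whole range, which is exactly the bug `memmove` exists to avoid.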


@ -1,41 +0,0 @@
/// Saved registers for kernel context switches.
#[repr(C)]
#[derive(Copy, Clone, Default)]
pub struct Context {
pub ra: u64,
pub sp: u64,
// callee-saved
pub s0: u64,
pub s1: u64,
pub s2: u64,
pub s3: u64,
pub s4: u64,
pub s5: u64,
pub s6: u64,
pub s7: u64,
pub s8: u64,
pub s9: u64,
pub s10: u64,
pub s11: u64,
}
impl Context {
pub const fn new() -> Context {
Context {
ra: 0u64,
sp: 0u64,
s0: 0u64,
s1: 0u64,
s2: 0u64,
s3: 0u64,
s4: 0u64,
s5: 0u64,
s6: 0u64,
s7: 0u64,
s8: 0u64,
s9: 0u64,
s10: 0u64,
s11: 0u64,
}
}
}


@ -1,45 +0,0 @@
use super::{context::Context, process::Process};
use core::ptr::{addr_of_mut, null_mut};
pub static mut CPUS: [Cpu; crate::NCPU] = [Cpu::new(); crate::NCPU];
/// Per-CPU state.
#[repr(C)]
#[derive(Copy, Clone)]
pub struct Cpu {
pub proc: *mut Process,
/// swtch() here to enter scheduler()
pub context: Context,
/// Depth of push_off() nesting.
pub interrupt_disable_layers: i32,
/// Were interrupts enabled before push_off()?
pub previous_interrupts_enabled: i32,
}
impl Cpu {
pub const fn new() -> Cpu {
Cpu {
proc: null_mut(),
context: Context::new(),
interrupt_disable_layers: 0,
previous_interrupts_enabled: 0,
}
}
/// Must be called with interrupts disabled
/// to prevent race with process being moved
/// to a different CPU.
pub fn current_id() -> usize {
crate::hal::arch::cpu::cpu_id()
}
/// Return this CPU's cpu struct.
/// Interrupts must be disabled.
pub fn current() -> &'static mut Cpu {
unsafe { &mut CPUS[Cpu::current_id()] }
}
}
/// Return this CPU's cpu struct.
/// Interrupts must be disabled.
#[no_mangle]
pub unsafe extern "C" fn mycpu() -> *mut Cpu {
addr_of_mut!(*Cpu::current())
}


@ -1,5 +0,0 @@
pub mod context;
pub mod cpu;
pub mod process;
pub mod scheduler;
pub mod trapframe;


@ -1,578 +0,0 @@
#![allow(clippy::comparison_chain)]
use super::{
context::Context,
cpu::Cpu,
scheduler::{sched, wakeup},
trapframe::Trapframe,
};
use crate::{
fs::{
file::{fileclose, filedup, File},
fsinit,
inode::{idup, iput, namei, Inode},
log::LogOperation,
FS_INITIALIZED,
},
hal::arch::{
mem::{kstack, Pagetable, PAGE_SIZE, PTE_R, PTE_W, PTE_X, TRAMPOLINE, TRAPFRAME},
trap::{usertrapret, InterruptBlocker},
virtual_memory::{
copyout, mappages, uvmalloc, uvmcopy, uvmcreate, uvmdealloc, uvmfirst, uvmfree,
uvmunmap,
},
},
mem::{
kalloc::{kalloc, kfree},
memset,
},
sync::spinlock::Spinlock,
uprintln,
};
use arrayvec::ArrayVec;
use core::{
ffi::{c_char, c_void, CStr},
ptr::{addr_of, addr_of_mut, null_mut},
sync::atomic::{AtomicI32, Ordering},
};
extern "C" {
// trampoline.S
pub static mut trampoline: *mut c_char;
}
pub static NEXT_PID: AtomicI32 = AtomicI32::new(1);
/// Helps ensure that wakeups of wait()ing
/// parents are not lost. Helps obey the
/// memory model when using p->parent.
/// Must be acquired before any p->lock.
pub static mut WAIT_LOCK: Spinlock = Spinlock::new();
pub static mut INITPROC: usize = 0;
pub static mut PROCESSES: ArrayVec<Process, { crate::NPROC }> = ArrayVec::new_const();
/// Initialize the proc table.
pub unsafe fn procinit() {
let mut i = 0;
let processes_iter = core::iter::repeat_with(|| {
let mut p = Process::new();
p.state = ProcessState::Unused;
p.kernel_stack = kstack(i) as u64;
i += 1;
p
});
PROCESSES = processes_iter.take(crate::NPROC).collect();
}
/// Set up the first user process.
pub unsafe fn userinit() {
let p = Process::alloc().unwrap();
INITPROC = addr_of_mut!(*p) as usize;
let initcode: &[u8] = &[
0x17, 0x05, 0x00, 0x00, 0x13, 0x05, 0x45, 0x02, 0x97, 0x05, 0x00, 0x00, 0x93, 0x85, 0x35,
0x02, 0x93, 0x08, 0x70, 0x00, 0x73, 0x00, 0x00, 0x00, 0x93, 0x08, 0x20, 0x00, 0x73, 0x00,
0x00, 0x00, 0xef, 0xf0, 0x9f, 0xff, 0x2f, 0x69, 0x6e, 0x69, 0x74, 0x00, 0x00, 0x24, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
];
// Allocate one user page and copy initcode's
// instructions and data into it.
uvmfirst(p.pagetable, initcode.as_ptr().cast_mut(), initcode.len());
p.memory_allocated = PAGE_SIZE as u64;
// Prepare for the very first "return" from kernel to user.
// User program counter
(*p.trapframe).epc = 0;
// User stack pointer
(*p.trapframe).sp = PAGE_SIZE as u64;
p.current_dir = namei(
CStr::from_bytes_with_nul(b"/\0")
.unwrap()
.as_ptr()
.cast_mut()
.cast(),
);
p.state = ProcessState::Runnable;
p.lock.unlock();
}
#[repr(C)]
#[derive(Copy, Clone, Debug, Default, PartialEq)]
pub enum ProcessState {
#[default]
Unused,
Used,
Sleeping,
Runnable,
Running,
Zombie,
}
#[derive(Copy, Clone, Debug, PartialEq)]
pub enum ProcessError {
MaxProcesses,
Allocation,
NoChildren,
Killed,
PageError,
}
/// Per-process state.
#[repr(C)]
#[derive(Clone)]
pub struct Process {
pub lock: Spinlock,
// p->lock must be held when using these:
/// Process state
pub state: ProcessState,
/// If non-zero, sleeping on chan
pub chan: *mut c_void,
/// If non-zero, have been killed
pub killed: i32,
/// Exit status to be returned to parent's wait
pub exit_status: i32,
/// Process ID
pub pid: i32,
// WAIT_LOCK must be held when using this:
/// Parent process
pub parent: *mut Process,
// These are private to the process, so p->lock need not be held.
/// Virtual address of kernel stack
pub kernel_stack: u64,
/// Size of process memory (bytes)
pub memory_allocated: u64,
/// User page table
pub pagetable: Pagetable,
/// Data page for trampoline.S
pub trapframe: *mut Trapframe,
/// swtch() here to run process
pub context: Context,
/// Open files
pub open_files: [*mut File; crate::NOFILE],
/// Current directory
pub current_dir: *mut Inode,
}
impl Process {
pub const fn new() -> Process {
Process {
lock: Spinlock::new(),
state: ProcessState::Unused,
chan: null_mut(),
killed: 0,
exit_status: 0,
pid: 0,
parent: null_mut(),
kernel_stack: 0,
memory_allocated: 0,
pagetable: null_mut(),
trapframe: null_mut(),
context: Context::new(),
open_files: [null_mut(); crate::NOFILE],
current_dir: null_mut(),
}
}
pub fn current() -> Option<&'static mut Process> {
let _ = InterruptBlocker::new();
let p = Cpu::current().proc;
if p.is_null() {
None
} else {
unsafe { Some(&mut *p) }
}
}
pub fn is_current(&self) -> bool {
addr_of!(*self).cast_mut() == Cpu::current().proc
}
pub fn is_initproc(&self) -> bool {
addr_of!(*self) as usize == unsafe { INITPROC }
}
pub fn alloc_pid() -> i32 {
NEXT_PID.fetch_add(1, Ordering::SeqCst)
}
/// Look in the process table for an UNUSED proc.
/// If found, initialize state required to run in the kernel,
/// and return with p.lock held.
/// If there are no free procs, or a memory allocation fails, return an error.
pub unsafe fn alloc() -> Result<&'static mut Process, ProcessError> {
let mut index: Option<usize> = None;
for (i, p) in PROCESSES.iter_mut().enumerate() {
p.lock.lock_unguarded();
if p.state == ProcessState::Unused {
index = Some(i);
break;
} else {
p.lock.unlock();
}
}
let Some(index) = index else {
return Err(ProcessError::MaxProcesses);
};
let p: &mut Process = &mut PROCESSES[index];
p.pid = Process::alloc_pid();
p.state = ProcessState::Used;
// Allocate a trapframe page.
p.trapframe = kalloc() as *mut Trapframe;
if p.trapframe.is_null() {
p.free();
p.lock.unlock();
return Err(ProcessError::Allocation);
}
// An empty user page table.
p.pagetable = proc_pagetable(addr_of_mut!(*p));
if p.pagetable.is_null() {
p.free();
p.lock.unlock();
return Err(ProcessError::Allocation);
}
// Set up new context to start executing at forkret,
// which returns to userspace.
memset(
addr_of_mut!(p.context).cast(),
0,
core::mem::size_of::<Context>(),
);
p.context.ra = Process::forkret as usize as u64;
p.context.sp = p.kernel_stack + PAGE_SIZE as u64;
Ok(p)
}
/// Free a proc structure and the data hanging from it, including user pages.
/// self.lock must be held.
pub unsafe fn free(&mut self) {
if !self.trapframe.is_null() {
kfree(self.trapframe.cast());
}
self.trapframe = null_mut();
if !self.pagetable.is_null() {
proc_freepagetable(self.pagetable, self.memory_allocated);
}
self.pagetable = null_mut();
self.memory_allocated = 0;
self.pid = 0;
self.parent = null_mut();
self.chan = null_mut();
self.killed = 0;
self.exit_status = 0;
self.state = ProcessState::Unused;
}
/// Grow or shrink user memory.
pub unsafe fn grow_memory(&mut self, num_bytes: i32) -> Result<(), ProcessError> {
let mut size = self.memory_allocated;
if num_bytes > 0 {
size = uvmalloc(
self.pagetable,
size as usize,
size.wrapping_add(num_bytes as u64) as usize,
PTE_W,
);
if size == 0 {
return Err(ProcessError::Allocation);
}
} else if num_bytes < 0 {
size = uvmdealloc(
self.pagetable,
size as usize,
size.wrapping_add(num_bytes as u64) as usize,
);
}
self.memory_allocated = size;
Ok(())
}
/// Create a user page table for a given process,
/// with no user memory, but with trampoline and trapframe pages.
pub unsafe fn alloc_pagetable(&mut self) -> Result<Pagetable, ProcessError> {
// Create an empty page table.
let pagetable: Pagetable = uvmcreate();
if pagetable.is_null() {
return Err(ProcessError::Allocation);
}
// Map the trampoline code (for syscall return)
// at the highest user virtual address.
// Only the supervisor uses it on the way
// to and from user space, so not PTE_U.
if mappages(
pagetable,
TRAMPOLINE,
PAGE_SIZE,
addr_of!(trampoline) as usize,
PTE_R | PTE_X,
) < 0
{
uvmfree(pagetable, 0);
return Err(ProcessError::Allocation);
}
// Map the trapframe page just below the trampoline page for trampoline.S.
if mappages(
pagetable,
TRAPFRAME,
PAGE_SIZE,
self.trapframe as usize,
PTE_R | PTE_W,
) < 0
{
uvmunmap(pagetable, TRAMPOLINE, 1, false);
uvmfree(pagetable, 0);
return Err(ProcessError::Allocation);
}
Ok(pagetable)
}
/// Free a process's pagetable and free the physical memory it refers to.
pub unsafe fn free_pagetable(pagetable: Pagetable, size: usize) {
uvmunmap(pagetable, TRAMPOLINE, 1, false);
uvmunmap(pagetable, TRAPFRAME, 1, false);
uvmfree(pagetable, size)
}
/// Create a new process, copying the parent.
/// Sets up child kernel stack to return as if from fork() syscall.
pub unsafe fn fork() -> Result<i32, ProcessError> {
let parent = Process::current().unwrap();
let child = Process::alloc()?;
// Copy user memory from parent to child.
if uvmcopy(
parent.pagetable,
child.pagetable,
parent.memory_allocated as usize,
) < 0
{
child.free();
child.lock.unlock();
return Err(ProcessError::Allocation);
}
child.memory_allocated = parent.memory_allocated;
// Copy saved user registers.
*child.trapframe = *parent.trapframe;
// Cause fork to return 0 in the child.
(*child.trapframe).a0 = 0;
// Increment reference counts on open file descriptors.
for (i, file) in parent.open_files.iter().enumerate() {
if !file.is_null() {
child.open_files[i] = filedup(parent.open_files[i]);
}
}
child.current_dir = idup(parent.current_dir);
let pid = child.pid;
child.lock.unlock();
{
let _guard = WAIT_LOCK.lock();
child.parent = addr_of!(*parent).cast_mut();
}
{
let _guard = child.lock.lock();
child.state = ProcessState::Runnable;
}
Ok(pid)
}
/// A fork child's very first scheduling by
/// scheduler() will swtch to forkret.
pub unsafe fn forkret() -> ! {
// Still holding p->lock from scheduler.
Process::current().unwrap().lock.unlock();
if !FS_INITIALIZED {
// File system initialization must be run in the context of a
// regular process (e.g., because it calls sleep), and thus
// cannot be run from main().
FS_INITIALIZED = true;
fsinit(crate::ROOTDEV as i32);
}
usertrapret()
}
/// Pass p's abandoned children to init.
/// Caller must hold WAIT_LOCK.
pub unsafe fn reparent(&self) {
for p in PROCESSES.iter_mut() {
if p.parent == addr_of!(*self).cast_mut() {
p.parent = INITPROC as *mut Process;
wakeup((INITPROC as *mut Process).cast());
}
}
}
/// Exit the current process. Does not return.
/// An exited process remains in the zombie state
/// until its parent calls wait().
pub unsafe fn exit(&mut self, status: i32) -> ! {
if self.is_initproc() {
panic!("init exiting");
}
// Close all open files.
for file in self.open_files.iter_mut() {
if !file.is_null() {
fileclose(*file);
*file = null_mut();
}
}
{
let _operation = LogOperation::new();
iput(self.current_dir);
}
self.current_dir = null_mut();
{
let _guard = WAIT_LOCK.lock();
// Give any children to init.
self.reparent();
// Parent might be sleeping in wait().
wakeup(self.parent.cast());
self.lock.lock_unguarded();
self.exit_status = status;
self.state = ProcessState::Zombie;
}
// Jump into the scheduler, never to return.
sched();
unreachable!();
}
/// Wait for a child process to exit, and return its pid.
pub unsafe fn wait_for_child(&mut self, addr: u64) -> Result<i32, ProcessError> {
let guard = WAIT_LOCK.lock();
loop {
// Scan through the table looking for exited children.
let mut has_children = false;
for p in PROCESSES.iter_mut() {
if p.parent == addr_of_mut!(*self) {
has_children = true;
// Ensure the child isn't still in exit() or swtch().
p.lock.lock_unguarded();
if p.state == ProcessState::Zombie {
// Found an exited child.
let pid = p.pid;
if addr != 0
&& copyout(
self.pagetable,
addr as usize,
addr_of_mut!(p.exit_status).cast(),
core::mem::size_of::<i32>(),
) < 0
{
p.lock.unlock();
return Err(ProcessError::PageError);
}
p.free();
p.lock.unlock();
return Ok(pid);
}
p.lock.unlock();
}
}
if !has_children {
return Err(ProcessError::NoChildren);
} else if self.is_killed() {
return Err(ProcessError::Killed);
}
// Wait for child to exit.
// DOC: wait-sleep
guard.sleep(addr_of_mut!(*self).cast());
}
}
/// Kill the process with the given pid.
/// Returns true if the process was killed.
/// The victim won't exit until it tries to return
/// to user space (see usertrap() in trap.c).
pub unsafe fn kill(pid: i32) -> bool {
for p in PROCESSES.iter_mut() {
let _guard = p.lock.lock();
if p.pid == pid {
p.killed = 1;
if p.state == ProcessState::Sleeping {
// Wake process from sleep().
p.state = ProcessState::Runnable;
}
return true;
}
}
false
}
pub fn is_killed(&self) -> bool {
let _guard = self.lock.lock();
self.killed > 0
}
pub fn set_killed(&mut self, killed: bool) {
let _guard = self.lock.lock();
if killed {
self.killed = 1;
} else {
self.killed = 0;
}
}
}
/// Return the current struct proc *, or zero if none.
#[no_mangle]
pub extern "C" fn myproc() -> *mut Process {
if let Some(p) = Process::current() {
p as *mut Process
} else {
null_mut()
}
}
#[no_mangle]
pub unsafe extern "C" fn proc_pagetable(p: *mut Process) -> Pagetable {
(*p).alloc_pagetable().unwrap_or(null_mut())
}
#[no_mangle]
pub unsafe extern "C" fn proc_freepagetable(pagetable: Pagetable, size: u64) {
Process::free_pagetable(pagetable, size as usize)
}
/// Print a process listing to console for debugging.
/// Runs when a user types ^P on console.
/// No lock to avoid wedging a stuck machine further.
pub unsafe fn procdump() {
uprintln!("\nprocdump:");
for p in PROCESSES.iter() {
if p.state != ProcessState::Unused {
uprintln!(" {}: {:?}", p.pid, p.state);
}
}
}


@ -1,128 +0,0 @@
use super::{
context::Context,
cpu::Cpu,
process::{Process, ProcessState, PROCESSES},
};
use crate::{
console::printf::println,
hal::arch,
sync::spinlock::{Spinlock, SpinlockGuard},
};
use core::{
ffi::c_void,
ptr::{addr_of, addr_of_mut, null_mut},
};
extern "C" {
// pub fn wakeup(chan: *const c_void);
// pub fn scheduler() -> !;
pub fn swtch(a: *mut Context, b: *mut Context);
}
/// Give up the CPU for one scheduling round.
pub unsafe fn r#yield() {
let p = Process::current().unwrap();
let _guard = p.lock.lock();
p.state = ProcessState::Runnable;
sched();
}
// Per-CPU process scheduler.
// Each CPU calls scheduler() after setting itself up.
// Scheduler never returns. It loops, doing:
// - choose a process to run.
// - swtch to start running that process.
// - eventually that process transfers control
// via swtch back to the scheduler.
pub unsafe fn scheduler() -> ! {
println!("hart {} starting scheduler", Cpu::current_id());
let cpu = Cpu::current();
cpu.proc = null_mut();
loop {
// Avoid deadlock by ensuring that devices can interrupt.
arch::interrupt::enable_interrupts();
for p in PROCESSES.iter_mut() {
let _guard = p.lock.lock();
if p.state == ProcessState::Runnable {
// Switch to the chosen process. It's the process's job
// to release its lock and then reacquire it before
// jumping back to us.
p.state = ProcessState::Running;
cpu.proc = addr_of!(*p).cast_mut();
// Run the process.
swtch(addr_of_mut!(cpu.context), addr_of_mut!(p.context));
// Process is done running for now.
// It should have changed its state before coming back.
cpu.proc = null_mut();
}
}
}
}
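The loop above is plain round-robin: scan the table, run each `Runnable` process, repeat. A safe sketch of just the selection policy (states only; `pick_next` is invented here and there is no context switch):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum State {
    Unused,
    Runnable,
    Running,
    Sleeping,
}

/// One scheduler step: pick the first Runnable process at or after
/// `start`, wrapping around the table, and mark it Running.
fn pick_next(table: &mut [State], start: usize) -> Option<usize> {
    let n = table.len();
    for off in 0..n {
        let i = (start + off) % n;
        if table[i] == State::Runnable {
            table[i] = State::Runnable; // still Runnable until claimed...
            table[i] = State::Running;  // ...then marked Running to run.
            return Some(i);
        }
    }
    None
}

fn main() {
    let mut table = [
        State::Unused,
        State::Runnable,
        State::Sleeping,
        State::Runnable,
    ];
    assert_eq!(pick_next(&mut table, 0), Some(1));
    // The chosen process is now Running; the next pass skips it.
    assert_eq!(pick_next(&mut table, 2), Some(3));
    assert_eq!(pick_next(&mut table, 0), None); // nothing left to run
}
```

In the kernel the `Running -> Runnable` transition back happens in `r#yield` (or `Sleeping` in `sleep`) before the process switches back to the scheduler.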
/// Switch to scheduler. Must hold only p->lock
/// and have changed proc->state. Saves and restores
/// previous_interrupts_enabled because previous_interrupts_enabled is a property of this
/// kernel thread, not this CPU. It should
/// be proc->previous_interrupts_enabled and proc->interrupt_disable_layers, but that would
/// break in the few places where a lock is held but
/// there's no process.
pub unsafe fn sched() {
let p = Process::current().unwrap();
let cpu = Cpu::current();
if cpu.interrupt_disable_layers != 1 {
panic!("sched locks");
} else if p.state == ProcessState::Running {
panic!("sched running");
} else if arch::interrupt::interrupts_enabled() > 0 {
panic!("sched interruptible");
}
let previous_interrupts_enabled = cpu.previous_interrupts_enabled;
swtch(addr_of_mut!(p.context), addr_of_mut!(cpu.context));
cpu.previous_interrupts_enabled = previous_interrupts_enabled;
}
/// The lock should already be locked.
/// Unsafely create a new guard for it so that we can call SpinlockGuard.sleep().
#[no_mangle]
pub unsafe extern "C" fn sleep_lock(chan: *mut c_void, lock: *mut Spinlock) {
let lock: &Spinlock = &*lock;
let guard = SpinlockGuard { lock };
guard.sleep(chan);
core::mem::forget(guard);
}
/// Sleep until `wakeup(chan)` is called somewhere else.
pub unsafe fn sleep(chan: *mut c_void) {
let p = Process::current().unwrap();
let _guard = p.lock.lock();
// Go to sleep.
p.chan = chan;
p.state = ProcessState::Sleeping;
sched();
// Tidy up.
p.chan = null_mut();
}
/// Wake up all processes sleeping on chan.
/// Must be called without any p.lock.
#[no_mangle]
pub unsafe extern "C" fn wakeup(chan: *mut c_void) {
for p in PROCESSES.iter_mut() {
if !p.is_current() {
let _guard = p.lock.lock();
if p.state == ProcessState::Sleeping && p.chan == chan {
p.state = ProcessState::Runnable;
}
}
}
}
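The `sleep()`/`wakeup()` pair above is the kernel's homegrown condition-variable mechanism: a sleeper re-checks its condition while holding a lock, so a wakeup can never slip in between the check and the sleep. A minimal userspace sketch of the same pattern, using std's `Mutex` and `Condvar` as stand-ins for `p.lock` and the channel (all names here are illustrative, not kernel APIs):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// Returns true once the "sleeper" observes the flag set by the "waker".
fn rendezvous() -> bool {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let waker = Arc::clone(&pair);

    let t = thread::spawn(move || {
        let (ready, cv) = &*waker;
        // Like wakeup(chan): change the state, then notify sleepers.
        *ready.lock().unwrap() = true;
        cv.notify_all();
    });

    let (ready, cv) = &*pair;
    let mut guard = ready.lock().unwrap();
    // Like sleep(chan): re-check the condition after every wakeup,
    // since wait() can also return spuriously.
    while !*guard {
        guard = cv.wait(guard).unwrap();
    }
    t.join().unwrap();
    *guard
}

fn main() {
    assert!(rendezvous());
}
```

The loop around `wait()` is the essential part: dropping it would reintroduce the lost-wakeup race the kernel code is structured to avoid.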


@ -1,100 +0,0 @@
/// Per-process data for the trap handling code in trampoline.S.
///
/// sits in a page by itself just under the trampoline page in the
/// user page table. not specially mapped in the kernel page table.
/// uservec in trampoline.S saves user registers in the trapframe,
/// then initializes registers from the trapframe's
/// kernel_sp, kernel_hartid, kernel_satp, and jumps to kernel_trap.
/// usertrapret() and userret in trampoline.S set up
/// the trapframe's kernel_*, restore user registers from the
/// trapframe, switch to the user page table, and enter user space.
/// the trapframe includes callee-saved user registers like s0-s11 because the
/// return-to-user path via usertrapret() doesn't return through
/// the entire kernel call stack.
#[repr(C)]
#[derive(Copy, Clone, Default)]
pub struct Trapframe {
/// Kernel page table.
pub kernel_satp: u64,
/// Top of process's kernel stack.
pub kernel_sp: u64,
/// usertrap()
pub kernel_trap: u64,
/// Saved user program counter.
pub epc: u64,
/// Saved kernel tp.
pub kernel_hartid: u64,
pub ra: u64,
pub sp: u64,
pub gp: u64,
pub tp: u64,
pub t0: u64,
pub t1: u64,
pub t2: u64,
pub s0: u64,
pub s1: u64,
pub a0: u64,
pub a1: u64,
pub a2: u64,
pub a3: u64,
pub a4: u64,
pub a5: u64,
pub a6: u64,
pub a7: u64,
pub s2: u64,
pub s3: u64,
pub s4: u64,
pub s5: u64,
pub s6: u64,
pub s7: u64,
pub s8: u64,
pub s9: u64,
pub s10: u64,
pub s11: u64,
pub t3: u64,
pub t4: u64,
pub t5: u64,
pub t6: u64,
}
impl Trapframe {
pub const fn new() -> Trapframe {
Trapframe {
kernel_satp: 0u64,
kernel_sp: 0u64,
kernel_trap: 0u64,
epc: 0u64,
kernel_hartid: 0u64,
ra: 0u64,
sp: 0u64,
gp: 0u64,
tp: 0u64,
t0: 0u64,
t1: 0u64,
t2: 0u64,
s0: 0u64,
s1: 0u64,
a0: 0u64,
a1: 0u64,
a2: 0u64,
a3: 0u64,
a4: 0u64,
a5: 0u64,
a6: 0u64,
a7: 0u64,
s2: 0u64,
s3: 0u64,
s4: 0u64,
s5: 0u64,
s6: 0u64,
s7: 0u64,
s8: 0u64,
s9: 0u64,
s10: 0u64,
s11: 0u64,
t3: 0u64,
t4: 0u64,
t5: 0u64,
t6: 0u64,
}
}
}


@ -1,107 +0,0 @@
use core::iter::*;
pub const QUEUE_SIZE: usize = 64;
#[derive(Copy, Clone, Debug, PartialEq)]
pub enum QueueError {
NoSpace,
}
pub type Result<T> = core::result::Result<T, QueueError>;
#[derive(Copy, Clone, Debug, PartialEq)]
pub struct Queue<T> {
inner: [Option<T>; QUEUE_SIZE],
/// The index of the first item in the queue.
queue_start: usize,
/// The length of the queue.
queue_len: usize,
}
impl<T: Copy> Queue<T> {
pub const fn new() -> Queue<T> {
Queue {
inner: [None; QUEUE_SIZE],
queue_start: 0,
queue_len: 0,
}
}
}
impl<T> Queue<T> {
/// Accessor method for the length of the queue.
pub fn len(&self) -> usize {
self.queue_len
}
pub fn is_empty(&self) -> bool {
self.len() == 0
}
/// Returns how many items can currently be added to the queue.
pub fn space_remaining(&self) -> usize {
self.inner.len() - self.len()
}
/// Returns the index of the last item in the queue.
fn queue_end(&self) -> usize {
(self.queue_start + self.queue_len - 1) % self.inner.len()
}
/// Removes an item from the front of the queue.
pub fn pop_front(&mut self) -> Option<T> {
let item = self.inner[self.queue_start].take();
if item.is_some() {
self.queue_start += 1;
self.queue_start %= self.inner.len();
self.queue_len -= 1;
}
item
}
/// Adds an item to the front of the queue.
pub fn push_front(&mut self, value: T) -> Result<()> {
if self.space_remaining() == 0 {
return Err(QueueError::NoSpace);
}
if self.queue_start == 0 {
self.queue_start = self.inner.len() - 1;
} else {
self.queue_start -= 1;
}
self.inner[self.queue_start] = Some(value);
self.queue_len += 1;
Ok(())
}
/// Removes an item from the end of the queue.
pub fn pop_back(&mut self) -> Option<T> {
if self.is_empty() {
// Guard first: queue_end() would underflow on an empty queue.
return None;
}
let item = self.inner[self.queue_end()].take();
if item.is_some() {
self.queue_len -= 1;
}
item
}
/// Adds an item to the end of the queue.
pub fn push_back(&mut self, value: T) -> Result<()> {
if self.space_remaining() == 0 {
return Err(QueueError::NoSpace);
}
self.queue_len += 1;
self.inner[self.queue_end()] = Some(value);
Ok(())
}
}
impl<T> Iterator for Queue<T> {
type Item = T;
fn next(&mut self) -> Option<Self::Item> {
self.pop_front()
}
}
impl<T> DoubleEndedIterator for Queue<T> {
fn next_back(&mut self) -> Option<Self::Item> {
self.pop_back()
}
}
impl<T> ExactSizeIterator for Queue<T> {
fn len(&self) -> usize {
self.len()
}
}
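The `Queue` above is a fixed-capacity ring buffer: the last occupied slot is `(queue_start + queue_len - 1) % capacity`, `push_back` claims the slot right after it, and `push_front` moves `queue_start` backwards with wraparound. A small standalone sketch of just that index arithmetic (the helper names and `CAP` are invented for illustration, not part of `Queue`):

```rust
const CAP: usize = 8;

/// Slot where the next push_back lands: one past the last element, mod CAP.
fn next_back_slot(start: usize, len: usize) -> usize {
    (start + len) % CAP
}

/// Slot where the next push_front lands: one before start, with wraparound.
fn next_front_slot(start: usize) -> usize {
    if start == 0 {
        CAP - 1
    } else {
        start - 1
    }
}

fn main() {
    // A queue of length 3 starting at slot 6 occupies slots 6, 7, 0.
    assert_eq!(next_back_slot(6, 3), 1); // first free slot after the wrap
    assert_eq!(next_front_slot(0), CAP - 1);
    assert_eq!(next_front_slot(5), 4);
}
```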


@ -1,79 +0,0 @@
use core::ffi::c_char;
pub(crate) unsafe fn strlen_checked(s: *const c_char, max_chars: usize) -> Option<i32> {
for len in 0..max_chars {
// Compare against 0 so this compiles whether c_char is i8 or u8.
if *s.add(len) == 0 {
return Some(len.try_into().unwrap_or(i32::MAX));
}
}
None
}
#[no_mangle]
pub unsafe extern "C" fn strlen(s: *const c_char) -> i32 {
strlen_checked(s, usize::MAX).unwrap_or(i32::MAX)
}
#[no_mangle]
pub unsafe extern "C" fn strncmp(mut a: *const u8, mut b: *const u8, mut max_chars: u32) -> i32 {
while max_chars > 0 && *a != 0 && *a == *b {
max_chars -= 1;
a = a.add(1);
b = b.add(1);
}
if max_chars == 0 {
0
} else {
// Widen before subtracting so the u8 difference can't underflow.
*a as i32 - *b as i32
}
}
#[no_mangle]
pub unsafe extern "C" fn strncpy(
mut a: *mut u8,
mut b: *const u8,
mut max_chars: i32,
) -> *const u8 {
let original_a = a;
while max_chars > 0 && *b != 0 {
*a = *b;
max_chars -= 1;
a = a.add(1);
b = b.add(1);
}
while max_chars > 0 {
*a = 0;
max_chars -= 1;
a = a.add(1);
}
original_a
}
/// Like strncpy but guaranteed to null-terminate.
#[no_mangle]
pub unsafe extern "C" fn safestrcpy(
mut a: *mut u8,
mut b: *const u8,
mut max_chars: i32,
) -> *const u8 {
let original_a = a;
if max_chars <= 0 {
return a;
} else {
max_chars -= 1;
}
while max_chars > 0 && *b != 0 {
*a = *b;
max_chars -= 1;
a = a.add(1);
b = b.add(1);
}
*a = 0;
original_a
}
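The only behavioral difference between `strncpy` and `safestrcpy` is termination: `strncpy` fills exactly `max_chars` bytes and may leave the destination without a NUL, while `safestrcpy` reserves the final byte for one. A safe-slice model of the two behaviors (these `_model` helpers are illustrations, not the kernel's pointer-based implementations):

```rust
// Copy like strncpy: copy up to dst.len() bytes, zero-pad the rest,
// but never guarantee a terminating NUL.
fn strncpy_model(dst: &mut [u8], src: &[u8]) {
    let mut i = 0;
    while i < dst.len() && i < src.len() && src[i] != 0 {
        dst[i] = src[i];
        i += 1;
    }
    while i < dst.len() {
        dst[i] = 0;
        i += 1;
    }
}

// Copy like safestrcpy: reserve the last byte so the result is
// always NUL-terminated.
fn safestrcpy_model(dst: &mut [u8], src: &[u8]) {
    if dst.is_empty() {
        return;
    }
    let n = dst.len() - 1; // leave room for the NUL
    let mut i = 0;
    while i < n && i < src.len() && src[i] != 0 {
        dst[i] = src[i];
        i += 1;
    }
    dst[i] = 0;
}

fn main() {
    let mut a = [0xffu8; 4];
    strncpy_model(&mut a, b"abcdef");
    assert_eq!(&a, b"abcd"); // no room left for a NUL

    let mut b = [0xffu8; 4];
    safestrcpy_model(&mut b, b"abcdef");
    assert_eq!(&b, b"abc\0"); // always terminated
}
```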


@ -1,115 +0,0 @@
use super::LockStrategy;
use crate::proc::{
process::{Process, ProcessState},
scheduler::{sched, sleep, wakeup},
};
use core::{
cell::UnsafeCell,
ptr::{addr_of, null_mut},
sync::atomic::{AtomicBool, Ordering},
};
pub struct Lock {
locked: AtomicBool,
lock_strategy: UnsafeCell<LockStrategy>,
}
impl Lock {
pub const fn new() -> Lock {
Lock {
locked: AtomicBool::new(false),
lock_strategy: UnsafeCell::new(LockStrategy::Spin),
}
}
pub fn lock_strategy(&self) -> LockStrategy {
unsafe { *self.lock_strategy.get() }
}
pub unsafe fn lock_unguarded(&self, lock_strategy: LockStrategy) {
// Lock it first, then store the lock strategy.
match lock_strategy {
LockStrategy::Spin => {
crate::hal::arch::trap::push_intr_off();
while self.locked.swap(true, Ordering::Acquire) {
core::hint::spin_loop();
}
}
LockStrategy::Sleep => {
while self.locked.swap(true, Ordering::Acquire) {
// Put the process to sleep until the mutex gets released.
sleep(addr_of!(*self).cast_mut().cast());
}
}
};
*self.lock_strategy.get() = lock_strategy;
}
pub fn lock(&self, lock_strategy: LockStrategy) -> LockGuard<'_> {
unsafe {
self.lock_unguarded(lock_strategy);
}
LockGuard { lock: self }
}
pub fn lock_spinning(&self) -> LockGuard<'_> {
self.lock(LockStrategy::Spin)
}
pub fn lock_sleeping(&self) -> LockGuard<'_> {
self.lock(LockStrategy::Sleep)
}
pub unsafe fn unlock(&self) {
let lock_strategy = self.lock_strategy();
self.locked.store(false, Ordering::Release);
match lock_strategy {
LockStrategy::Spin => {
crate::hal::arch::trap::pop_intr_off();
}
LockStrategy::Sleep => {
wakeup(addr_of!(*self).cast_mut().cast());
}
}
}
}
impl Default for Lock {
fn default() -> Lock {
Lock::new()
}
}
impl Clone for Lock {
fn clone(&self) -> Self {
Lock {
locked: AtomicBool::new(self.locked.load(Ordering::SeqCst)),
lock_strategy: UnsafeCell::new(self.lock_strategy()),
}
}
}
unsafe impl Sync for Lock {}
pub struct LockGuard<'l> {
pub lock: &'l Lock,
}
impl<'l> LockGuard<'l> {
/// Sleep until `wakeup(chan)` is called somewhere
/// else, yielding access to the lock until then.
pub unsafe fn sleep(&self, chan: *mut core::ffi::c_void) {
let proc = Process::current().unwrap();
let _guard = proc.lock.lock();
let strategy = self.lock.lock_strategy();
self.lock.unlock();
// Put the process to sleep.
proc.chan = chan;
proc.state = ProcessState::Sleeping;
sched();
// Tidy up and reacquire the lock.
proc.chan = null_mut();
self.lock.lock_unguarded(strategy);
}
}
impl<'l> Drop for LockGuard<'l> {
fn drop(&mut self) {
unsafe { self.lock.unlock() }
}
}


@ -1,13 +0,0 @@
pub mod lock;
pub mod mutex;
// These have to stick around until the entire program is in rust =(
pub mod sleeplock;
pub mod spinlock;
#[derive(Copy, Clone, Debug, Default, PartialEq)]
pub enum LockStrategy {
#[default]
Spin,
Sleep,
}


@ -1,93 +0,0 @@
use super::{
lock::{Lock, LockGuard},
LockStrategy,
};
use core::{
cell::UnsafeCell,
ops::{Deref, DerefMut},
};
pub struct Mutex<T> {
lock: Lock,
inner: UnsafeCell<T>,
}
impl<T> Mutex<T> {
pub const fn new(value: T) -> Mutex<T> {
Mutex {
lock: Lock::new(),
inner: UnsafeCell::new(value),
}
}
pub unsafe fn as_inner(&self) -> *mut T {
self.inner.get()
}
pub unsafe fn lock_unguarded(&self, lock_strategy: LockStrategy) {
self.lock.lock_unguarded(lock_strategy);
}
pub fn lock(&self, lock_strategy: LockStrategy) -> MutexGuard<'_, T> {
unsafe {
self.lock_unguarded(lock_strategy);
}
MutexGuard { mutex: self }
}
pub fn lock_spinning(&self) -> MutexGuard<'_, T> {
self.lock(LockStrategy::Spin)
}
pub fn lock_sleeping(&self) -> MutexGuard<'_, T> {
self.lock(LockStrategy::Sleep)
}
pub unsafe fn unlock(&self) {
self.lock.unlock();
}
}
unsafe impl<T> Sync for Mutex<T> where T: Send {}
impl<T> Clone for Mutex<T>
where
T: Clone,
{
fn clone(&self) -> Self {
let value: T = self.lock_spinning().as_ref().clone();
Mutex::new(value)
}
}
pub struct MutexGuard<'m, T> {
pub mutex: &'m Mutex<T>,
}
impl<'m, T> MutexGuard<'m, T> {
/// Sleep until `wakeup(chan)` is called somewhere else, yielding access to the mutex until then.
pub unsafe fn sleep(&mut self, chan: *mut core::ffi::c_void) {
let guard = LockGuard {
lock: &self.mutex.lock,
};
guard.sleep(chan);
core::mem::forget(guard);
}
}
impl<'m, T> Deref for MutexGuard<'m, T> {
type Target = T;
fn deref(&self) -> &Self::Target {
unsafe { &*self.mutex.as_inner() }
}
}
impl<'m, T> DerefMut for MutexGuard<'m, T> {
fn deref_mut(&mut self) -> &mut Self::Target {
unsafe { &mut *self.mutex.as_inner() }
}
}
impl<'m, T> AsRef<T> for MutexGuard<'m, T> {
fn as_ref(&self) -> &T {
self.deref()
}
}
impl<'m, T> AsMut<T> for MutexGuard<'m, T> {
fn as_mut(&mut self) -> &mut T {
self.deref_mut()
}
}
impl<'m, T> Drop for MutexGuard<'m, T> {
fn drop(&mut self) {
unsafe { self.mutex.unlock() }
}
}


@ -1,67 +0,0 @@
use crate::proc::scheduler::{sleep, wakeup};
use core::{
ffi::c_char,
ptr::addr_of,
sync::atomic::{AtomicBool, Ordering},
};
#[repr(C)]
#[derive(Default)]
pub struct Sleeplock {
pub locked: AtomicBool,
}
impl Sleeplock {
pub const fn new() -> Sleeplock {
Sleeplock {
locked: AtomicBool::new(false),
}
}
#[allow(clippy::while_immutable_condition)]
pub unsafe fn lock_unguarded(&self) {
while self.locked.swap(true, Ordering::Acquire) {
// Put the process to sleep until it gets released.
sleep(addr_of!(*self).cast_mut().cast());
}
}
pub fn lock(&self) -> SleeplockGuard<'_> {
unsafe {
self.lock_unguarded();
}
SleeplockGuard { lock: self }
}
pub unsafe fn unlock(&self) {
self.locked.store(false, Ordering::Release);
wakeup(addr_of!(*self).cast_mut().cast());
}
}
impl Clone for Sleeplock {
fn clone(&self) -> Self {
Sleeplock {
locked: AtomicBool::new(self.locked.load(Ordering::SeqCst)),
}
}
}
pub struct SleeplockGuard<'l> {
pub lock: &'l Sleeplock,
}
impl<'l> Drop for SleeplockGuard<'l> {
fn drop(&mut self) {
unsafe { self.lock.unlock() }
}
}
#[no_mangle]
pub unsafe extern "C" fn initsleeplock(lock: *mut Sleeplock, _name: *mut c_char) {
(*lock) = Sleeplock::new();
}
#[no_mangle]
pub unsafe extern "C" fn acquiresleep(lock: *mut Sleeplock) {
(*lock).lock_unguarded();
}
#[no_mangle]
pub unsafe extern "C" fn releasesleep(lock: *mut Sleeplock) {
(*lock).unlock();
}


@ -1,92 +0,0 @@
use crate::{
hal::arch::trap::{pop_intr_off, push_intr_off},
proc::{
process::{Process, ProcessState},
scheduler::sched,
},
};
use core::{
ffi::c_char,
ptr::null_mut,
sync::atomic::{AtomicBool, Ordering},
};
#[repr(C)]
#[derive(Default)]
pub struct Spinlock {
pub locked: AtomicBool,
}
impl Spinlock {
/// Initializes a `Spinlock`.
pub const fn new() -> Spinlock {
Spinlock {
locked: AtomicBool::new(false),
}
}
pub unsafe fn lock_unguarded(&self) {
push_intr_off();
while self.locked.swap(true, Ordering::Acquire) {
core::hint::spin_loop();
}
}
pub fn lock(&self) -> SpinlockGuard<'_> {
unsafe {
self.lock_unguarded();
}
SpinlockGuard { lock: self }
}
pub unsafe fn unlock(&self) {
self.locked.store(false, Ordering::Release);
pop_intr_off();
}
}
impl Clone for Spinlock {
fn clone(&self) -> Self {
Spinlock {
locked: AtomicBool::new(self.locked.load(Ordering::SeqCst)),
}
}
}
pub struct SpinlockGuard<'l> {
pub lock: &'l Spinlock,
}
impl<'l> SpinlockGuard<'l> {
/// Sleep until `wakeup(chan)` is called somewhere else, yielding the lock until then.
pub unsafe fn sleep(&self, chan: *mut core::ffi::c_void) {
let proc = Process::current().unwrap();
let _guard = proc.lock.lock();
self.lock.unlock();
// Put the process to sleep.
proc.chan = chan;
proc.state = ProcessState::Sleeping;
sched();
// Tidy up and reacquire the lock.
proc.chan = null_mut();
self.lock.lock_unguarded();
}
}
impl<'l> Drop for SpinlockGuard<'l> {
fn drop(&mut self) {
unsafe { self.lock.unlock() }
}
}
#[no_mangle]
pub unsafe extern "C" fn initlock(lock: *mut Spinlock, _name: *mut c_char) {
*lock = Spinlock::new();
}
#[no_mangle]
pub unsafe extern "C" fn acquire(lock: *mut Spinlock) {
(*lock).lock_unguarded();
}
#[no_mangle]
pub unsafe extern "C" fn release(lock: *mut Spinlock) {
(*lock).unlock();
}


@ -1,399 +0,0 @@
use crate::{
fs::{
file::{self, File},
inode::{ilock, iput, iunlock, namei},
log::LogOperation,
stat::KIND_DIR,
},
hal::{
arch::{
clock::CLOCK_TICKS,
virtual_memory::{copyin, copyinstr},
},
platform::shutdown,
},
println,
proc::process::Process,
string::strlen,
NOFILE,
};
use core::{
mem::size_of,
ptr::{addr_of, addr_of_mut, null_mut},
};
extern "C" {
fn sys_pipe() -> u64;
fn sys_exec() -> u64;
fn sys_fstat() -> u64;
fn sys_chdir() -> u64;
fn sys_open() -> u64;
fn sys_mknod() -> u64;
fn sys_unlink() -> u64;
fn sys_link() -> u64;
fn sys_mkdir() -> u64;
}
pub enum Syscall {
Fork,
Exit,
Wait,
Pipe,
Read,
Kill,
Exec,
Fstat,
Chdir,
Dup,
Getpid,
Sbrk,
Sleep,
Uptime,
Open,
Write,
Mknod,
Unlink,
Link,
Mkdir,
Close,
Shutdown,
}
impl Syscall {
pub unsafe fn call(&self) -> u64 {
match self {
Syscall::Fork => Process::fork().unwrap_or(-1) as i64 as u64,
Syscall::Exit => {
let mut status = 0i32;
argint(0, addr_of_mut!(status));
Process::current().unwrap().exit(status)
}
Syscall::Wait => {
let mut p = 0u64;
argaddr(0, addr_of_mut!(p));
Process::current().unwrap().wait_for_child(p).unwrap_or(-1) as i64 as u64
}
Syscall::Pipe => sys_pipe(),
Syscall::Read => {
let mut file: *mut File = null_mut();
let mut num_bytes: i32 = 0;
let mut ptr: u64 = 0;
if argfd(0, null_mut(), addr_of_mut!(file)) >= 0 {
argaddr(1, addr_of_mut!(ptr));
argint(2, addr_of_mut!(num_bytes));
file::fileread(file, ptr, num_bytes) as i64 as u64
} else {
-1i64 as u64
}
}
Syscall::Kill => {
let mut pid = 0i32;
argint(0, addr_of_mut!(pid));
Process::kill(pid) as u64
}
Syscall::Exec => sys_exec(),
Syscall::Fstat => {
let mut file: *mut File = null_mut();
// User pointer to struct stat.
let mut stat: u64 = 0;
if argfd(0, null_mut(), addr_of_mut!(file)) >= 0 {
argaddr(1, addr_of_mut!(stat));
file::filestat(file, stat) as i64 as u64
} else {
-1i64 as u64
}
}
Syscall::Chdir => {
let mut path = [0u8; crate::MAXPATH];
let proc = Process::current().unwrap();
let _operation = LogOperation::new();
if argstr(0, addr_of_mut!(path).cast(), path.len() as i32) < 0 {
return -1i64 as u64;
}
let inode = namei(addr_of_mut!(path).cast());
if inode.is_null() {
return -1i64 as u64;
}
ilock(inode);
if (*inode).kind != KIND_DIR {
iunlock(inode);
iput(inode);
return -1i64 as u64;
}
iunlock(inode);
iput(proc.current_dir);
proc.current_dir = inode;
0
}
Syscall::Dup => {
let mut file: *mut File = null_mut();
if argfd(0, null_mut(), addr_of_mut!(file)) < 0 {
return -1i64 as u64;
}
let Ok(file_descriptor) = fdalloc(file) else {
return -1i64 as u64;
};
file::filedup(file);
file_descriptor as u64
}
Syscall::Getpid => Process::current().unwrap().pid as u64,
Syscall::Sbrk => {
let mut n = 0i32;
argint(0, addr_of_mut!(n));
let proc = Process::current().unwrap();
let addr = proc.memory_allocated;
if unsafe { proc.grow_memory(n).is_ok() } {
addr
} else {
-1i64 as u64
}
}
Syscall::Sleep => {
let mut n = 0i32;
argint(0, addr_of_mut!(n));
let mut ticks = CLOCK_TICKS.lock_spinning();
// Sleep relative to the tick count at entry.
let start = *ticks;
while *ticks - start < n as usize {
if Process::current().unwrap().is_killed() {
return -1i64 as u64;
}
// Sleep until the value changes.
ticks.sleep(addr_of!(CLOCK_TICKS).cast_mut().cast());
}
0
}
// Returns how many clock tick interrupts have occurred since start.
Syscall::Uptime => *CLOCK_TICKS.lock_spinning() as u64,
Syscall::Open => sys_open(),
Syscall::Write => {
let mut file: *mut File = null_mut();
let mut num_bytes: i32 = 0;
let mut ptr: u64 = 0;
if argfd(0, null_mut(), addr_of_mut!(file)) >= 0 {
argaddr(1, addr_of_mut!(ptr));
argint(2, addr_of_mut!(num_bytes));
file::filewrite(file, ptr, num_bytes) as i64 as u64
} else {
-1i64 as u64
}
}
Syscall::Mknod => sys_mknod(),
Syscall::Unlink => sys_unlink(),
Syscall::Link => sys_link(),
Syscall::Mkdir => sys_mkdir(),
Syscall::Close => {
let mut file_descriptor: i32 = 0;
let mut file: *mut File = null_mut();
if argfd(0, addr_of_mut!(file_descriptor), addr_of_mut!(file)) >= 0 {
Process::current().unwrap().open_files[file_descriptor as usize] = null_mut();
file::fileclose(file);
0
} else {
-1i64 as u64
}
}
Syscall::Shutdown => unsafe { shutdown() },
}
}
}
impl TryFrom<usize> for Syscall {
type Error = ();
fn try_from(value: usize) -> core::result::Result<Self, Self::Error> {
match value {
1 => Ok(Syscall::Fork),
2 => Ok(Syscall::Exit),
3 => Ok(Syscall::Wait),
4 => Ok(Syscall::Pipe),
5 => Ok(Syscall::Read),
6 => Ok(Syscall::Kill),
7 => Ok(Syscall::Exec),
8 => Ok(Syscall::Fstat),
9 => Ok(Syscall::Chdir),
10 => Ok(Syscall::Dup),
11 => Ok(Syscall::Getpid),
12 => Ok(Syscall::Sbrk),
13 => Ok(Syscall::Sleep),
14 => Ok(Syscall::Uptime),
15 => Ok(Syscall::Open),
16 => Ok(Syscall::Write),
17 => Ok(Syscall::Mknod),
18 => Ok(Syscall::Unlink),
19 => Ok(Syscall::Link),
20 => Ok(Syscall::Mkdir),
21 => Ok(Syscall::Close),
22 => Ok(Syscall::Shutdown),
_ => Err(()),
}
}
}
impl From<Syscall> for usize {
fn from(syscall: Syscall) -> usize {
match syscall {
Syscall::Fork => 1,
Syscall::Exit => 2,
Syscall::Wait => 3,
Syscall::Pipe => 4,
Syscall::Read => 5,
Syscall::Kill => 6,
Syscall::Exec => 7,
Syscall::Fstat => 8,
Syscall::Chdir => 9,
Syscall::Dup => 10,
Syscall::Getpid => 11,
Syscall::Sbrk => 12,
Syscall::Sleep => 13,
Syscall::Uptime => 14,
Syscall::Open => 15,
Syscall::Write => 16,
Syscall::Mknod => 17,
Syscall::Unlink => 18,
Syscall::Link => 19,
Syscall::Mkdir => 20,
Syscall::Close => 21,
Syscall::Shutdown => 22,
}
}
}
/// Fetch the u64 at addr from the current process.
#[no_mangle]
pub unsafe extern "C" fn fetchaddr(addr: u64, ip: *mut u64) -> i32 {
let proc = Process::current().unwrap();
// Both tests needed, in case of overflow.
if addr >= proc.memory_allocated
|| addr + size_of::<u64>() as u64 > proc.memory_allocated
|| copyin(
proc.pagetable,
ip.cast(),
addr as usize,
size_of::<u64>(),
) != 0
{
-1
} else {
0
}
}
/// Fetch the null-terminated string at addr from the current process.
///
/// Returns length of string, not including null, or -1 for error.
#[no_mangle]
pub unsafe extern "C" fn fetchstr(addr: u64, buf: *mut u8, max: i32) -> i32 {
let proc = Process::current().unwrap();
if copyinstr(proc.pagetable, buf, addr as usize, max as u32 as usize) < 0 {
-1
} else {
strlen(buf.cast())
}
}
/// Allocate a file descriptor for the given file.
/// Takes over file reference from caller on success.
unsafe fn fdalloc(file: *mut File) -> Result<usize, ()> {
let proc = Process::current().unwrap();
for file_descriptor in 0..crate::NOFILE {
if proc.open_files[file_descriptor].is_null() {
proc.open_files[file_descriptor] = file;
return Ok(file_descriptor);
}
}
Err(())
}
unsafe fn argraw(argument_index: usize) -> u64 {
let proc = Process::current().unwrap();
match argument_index {
0 => (*proc.trapframe).a0,
1 => (*proc.trapframe).a1,
2 => (*proc.trapframe).a2,
3 => (*proc.trapframe).a3,
4 => (*proc.trapframe).a4,
5 => (*proc.trapframe).a5,
_ => panic!("argraw"),
}
}
/// Fetch the n-th 32-bit syscall argument.
#[no_mangle]
pub unsafe extern "C" fn argint(n: i32, ip: *mut i32) {
*ip = argraw(n as usize) as i32;
}
/// Retrieve an argument as a pointer.
///
/// Doesn't check for legality, since
/// copyin/copyout will do that.
#[no_mangle]
pub unsafe extern "C" fn argaddr(n: i32, ip: *mut u64) {
*ip = argraw(n as usize);
}
/// Fetch the n-th word-sized syscall argument as a file descriptor
/// and return both the descriptor and the corresponding struct file.
#[no_mangle]
pub unsafe extern "C" fn argfd(
n: i32,
file_descriptor_out: *mut i32,
file_out: *mut *mut File,
) -> i32 {
let file_descriptor = argraw(n as usize) as usize;
if file_descriptor >= NOFILE {
return -1;
}
let file: *mut File = Process::current().unwrap().open_files[file_descriptor];
if file.is_null() {
return -1;
}
if !file_descriptor_out.is_null() {
*file_descriptor_out = file_descriptor as i32;
}
if !file_out.is_null() {
*file_out = file;
}
0
}
/// Fetch the n-th word-sized syscall argument as a null-terminated string.
///
/// Copies into buf, at most max.
/// Returns string length if ok (including null), -1 if error.
#[no_mangle]
pub unsafe extern "C" fn argstr(n: i32, buf: *mut u8, max: i32) -> i32 {
let mut addr = 0u64;
argaddr(n, addr_of_mut!(addr));
fetchstr(addr, buf, max)
}
pub unsafe fn syscall() {
let proc = Process::current().unwrap();
let num = (*proc.trapframe).a7;
(*proc.trapframe).a0 = match TryInto::<Syscall>::try_into(num as usize) {
Ok(syscall) => syscall.call(),
Err(_) => {
println!("{} unknown syscall {}", proc.pid, num);
-1i64 as u64
}
};
}

kernel/sleeplock.c Normal file

@ -0,0 +1,55 @@
// Sleeping locks
#include "types.h"
#include "riscv.h"
#include "defs.h"
#include "param.h"
#include "memlayout.h"
#include "spinlock.h"
#include "proc.h"
#include "sleeplock.h"
void
initsleeplock(struct sleeplock *lk, char *name)
{
initlock(&lk->lk, "sleep lock");
lk->name = name;
lk->locked = 0;
lk->pid = 0;
}
void
acquiresleep(struct sleeplock *lk)
{
acquire(&lk->lk);
while (lk->locked) {
sleep(lk, &lk->lk);
}
lk->locked = 1;
lk->pid = myproc()->pid;
release(&lk->lk);
}
void
releasesleep(struct sleeplock *lk)
{
acquire(&lk->lk);
lk->locked = 0;
lk->pid = 0;
wakeup(lk);
release(&lk->lk);
}
int
holdingsleep(struct sleeplock *lk)
{
int r;
acquire(&lk->lk);
r = lk->locked && (lk->pid == myproc()->pid);
release(&lk->lk);
return r;
}


@ -1,10 +1,10 @@
#include "types.h"
#include "spinlock.h"
#pragma once
// Long-term locks for processes
struct sleeplock {
// Is the lock held?
uint8 locked;
uint locked; // Is the lock held?
struct spinlock lk; // spinlock protecting this sleep lock
// For debugging:
char *name; // Name of lock.
int pid; // Process holding lock
};

kernel/spinlock.c Normal file

@ -0,0 +1,110 @@
// Mutual exclusion spin locks.
#include "types.h"
#include "param.h"
#include "memlayout.h"
#include "spinlock.h"
#include "riscv.h"
#include "proc.h"
#include "defs.h"
void
initlock(struct spinlock *lk, char *name)
{
lk->name = name;
lk->locked = 0;
lk->cpu = 0;
}
// Acquire the lock.
// Loops (spins) until the lock is acquired.
void
acquire(struct spinlock *lk)
{
push_off(); // disable interrupts to avoid deadlock.
if(holding(lk))
panic("acquire");
// On RISC-V, sync_lock_test_and_set turns into an atomic swap:
// a5 = 1
// s1 = &lk->locked
// amoswap.w.aq a5, a5, (s1)
while(__sync_lock_test_and_set(&lk->locked, 1) != 0)
;
// Tell the C compiler and the processor to not move loads or stores
// past this point, to ensure that the critical section's memory
// references happen strictly after the lock is acquired.
// On RISC-V, this emits a fence instruction.
__sync_synchronize();
// Record info about lock acquisition for holding() and debugging.
lk->cpu = mycpu();
}
// Release the lock.
void
release(struct spinlock *lk)
{
if(!holding(lk))
panic("release");
lk->cpu = 0;
// Tell the C compiler and the CPU to not move loads or stores
// past this point, to ensure that all the stores in the critical
// section are visible to other CPUs before the lock is released,
// and that loads in the critical section occur strictly before
// the lock is released.
// On RISC-V, this emits a fence instruction.
__sync_synchronize();
// Release the lock, equivalent to lk->locked = 0.
// This code doesn't use a C assignment, since the C standard
// implies that an assignment might be implemented with
// multiple store instructions.
// On RISC-V, sync_lock_release turns into an atomic swap:
// s1 = &lk->locked
// amoswap.w zero, zero, (s1)
__sync_lock_release(&lk->locked);
pop_off();
}
// Check whether this cpu is holding the lock.
// Interrupts must be off.
int
holding(struct spinlock *lk)
{
int r;
r = (lk->locked && lk->cpu == mycpu());
return r;
}
// push_off/pop_off are like intr_off()/intr_on() except that they are matched:
// it takes two pop_off()s to undo two push_off()s. Also, if interrupts
// are initially off, then push_off, pop_off leaves them off.
void
push_off(void)
{
int old = intr_get();
intr_off();
if(mycpu()->noff == 0)
mycpu()->intena = old;
mycpu()->noff += 1;
}
void
pop_off(void)
{
struct cpu *c = mycpu();
if(intr_get())
panic("pop_off - interruptible");
if(c->noff < 1)
panic("pop_off");
c->noff -= 1;
if(c->noff == 0 && c->intena)
intr_on();
}
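push_off/pop_off implement matched, nestable interrupt disabling: only the outermost pop_off re-enables interrupts, and only if they were enabled before the outermost push_off. A Rust sketch that models just this bookkeeping (`CpuModel` and its fields are invented for illustration; no real interrupt state is touched):

```rust
struct CpuModel {
    intr_on: bool, // current interrupt-enable flag
    noff: u32,     // depth of push_off nesting
    intena: bool,  // were interrupts on before the first push_off?
}

impl CpuModel {
    fn push_off(&mut self) {
        let old = self.intr_on;
        self.intr_on = false; // like intr_off()
        if self.noff == 0 {
            // Remember the pre-push state only at the outermost level.
            self.intena = old;
        }
        self.noff += 1;
    }

    fn pop_off(&mut self) {
        assert!(!self.intr_on, "pop_off - interruptible");
        assert!(self.noff >= 1, "pop_off");
        self.noff -= 1;
        // Re-enable only when the nest fully unwinds and interrupts
        // were on to begin with.
        if self.noff == 0 && self.intena {
            self.intr_on = true; // like intr_on()
        }
    }
}

fn main() {
    let mut cpu = CpuModel { intr_on: true, noff: 0, intena: false };
    cpu.push_off();
    cpu.push_off(); // nested acquire
    cpu.pop_off();  // inner release: interrupts stay off
    assert!(!cpu.intr_on);
    cpu.pop_off();  // outer release: state from before the nest is restored
    assert!(cpu.intr_on);
}
```

If interrupts were already off before the first push_off, the final pop_off leaves them off, which is exactly the property the comment above describes.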


@ -1,9 +1,9 @@
#include "types.h"
#pragma once
// Mutual exclusion lock.
struct spinlock {
// Is the lock held?
uint8 locked;
uint locked; // Is the lock held?
// For debugging:
char *name; // Name of lock.
struct cpu *cpu; // The cpu holding the lock.
};

kernel/start.c Normal file

@ -0,0 +1,89 @@
#include "types.h"
#include "param.h"
#include "memlayout.h"
#include "riscv.h"
#include "defs.h"
void main();
void timerinit();
// entry.S needs one stack per CPU.
__attribute__ ((aligned (16))) char stack0[4096 * NCPU];
// a scratch area per CPU for machine-mode timer interrupts.
uint64 timer_scratch[NCPU][5];
// assembly code in kernelvec.S for machine-mode timer interrupt.
extern void timervec();
// entry.S jumps here in machine mode on stack0.
void
start()
{
// set M Previous Privilege mode to Supervisor, for mret.
unsigned long x = r_mstatus();
x &= ~MSTATUS_MPP_MASK;
x |= MSTATUS_MPP_S;
w_mstatus(x);
// set M Exception Program Counter to main, for mret.
// requires gcc -mcmodel=medany
w_mepc((uint64)main);
// disable paging for now.
w_satp(0);
// delegate all interrupts and exceptions to supervisor mode.
w_medeleg(0xffff);
w_mideleg(0xffff);
w_sie(r_sie() | SIE_SEIE | SIE_STIE | SIE_SSIE);
// configure Physical Memory Protection to give supervisor mode
// access to all of physical memory.
w_pmpaddr0(0x3fffffffffffffull);
w_pmpcfg0(0xf);
// ask for clock interrupts.
timerinit();
// keep each CPU's hartid in its tp register, for cpuid().
int id = r_mhartid();
w_tp(id);
// switch to supervisor mode and jump to main().
asm volatile("mret");
}
// arrange to receive timer interrupts.
// they will arrive in machine mode at
// at timervec in kernelvec.S,
// which turns them into software interrupts for
// devintr() in trap.c.
void
timerinit()
{
// each CPU has a separate source of timer interrupts.
int id = r_mhartid();
// ask the CLINT for a timer interrupt.
int interval = 1000000; // cycles; about 1/10th second in qemu.
*(uint64*)CLINT_MTIMECMP(id) = *(uint64*)CLINT_MTIME + interval;
// prepare information in scratch[] for timervec.
// scratch[0..2] : space for timervec to save registers.
// scratch[3] : address of CLINT MTIMECMP register.
// scratch[4] : desired interval (in cycles) between timer interrupts.
uint64 *scratch = &timer_scratch[id][0];
scratch[3] = CLINT_MTIMECMP(id);
scratch[4] = interval;
w_mscratch((uint64)scratch);
// set the machine-mode trap handler.
w_mtvec((uint64)timervec);
// enable machine-mode interrupts.
w_mstatus(r_mstatus() | MSTATUS_MIE);
// enable machine-mode timer interrupts.
w_mie(r_mie() | MIE_MTIE);
}


@ -1,5 +1,3 @@
#include "types.h"
#define T_DIR 1 // Directory
#define T_FILE 2 // File
#define T_DEVICE 3 // Device

kernel/string.c Normal file

@ -0,0 +1,107 @@
#include "types.h"
void*
memset(void *dst, int c, uint n)
{
char *cdst = (char *) dst;
int i;
for(i = 0; i < n; i++){
cdst[i] = c;
}
return dst;
}
int
memcmp(const void *v1, const void *v2, uint n)
{
const uchar *s1, *s2;
s1 = v1;
s2 = v2;
while(n-- > 0){
if(*s1 != *s2)
return *s1 - *s2;
s1++, s2++;
}
return 0;
}
void*
memmove(void *dst, const void *src, uint n)
{
const char *s;
char *d;
if(n == 0)
return dst;
s = src;
d = dst;
if(s < d && s + n > d){
s += n;
d += n;
while(n-- > 0)
*--d = *--s;
} else
while(n-- > 0)
*d++ = *s++;
return dst;
}
// memcpy exists to placate GCC. Use memmove.
void*
memcpy(void *dst, const void *src, uint n)
{
return memmove(dst, src, n);
}
int
strncmp(const char *p, const char *q, uint n)
{
while(n > 0 && *p && *p == *q)
n--, p++, q++;
if(n == 0)
return 0;
return (uchar)*p - (uchar)*q;
}
char*
strncpy(char *s, const char *t, int n)
{
char *os;
os = s;
while(n-- > 0 && (*s++ = *t++) != 0)
;
while(n-- > 0)
*s++ = 0;
return os;
}
// Like strncpy but guaranteed to NUL-terminate.
char*
safestrcpy(char *s, const char *t, int n)
{
char *os;
os = s;
if(n <= 0)
return os;
while(--n > 0 && (*s++ = *t++) != 0)
;
*s = 0;
return os;
}
int
strlen(const char *s)
{
int n;
for(n = 0; s[n]; n++)
;
return n;
}

kernel/syscall.c Normal file

@ -0,0 +1,147 @@
#include "types.h"
#include "param.h"
#include "memlayout.h"
#include "riscv.h"
#include "spinlock.h"
#include "proc.h"
#include "syscall.h"
#include "defs.h"
// Fetch the uint64 at addr from the current process.
int
fetchaddr(uint64 addr, uint64 *ip)
{
struct proc *p = myproc();
if(addr >= p->sz || addr+sizeof(uint64) > p->sz) // both tests needed, in case of overflow
return -1;
if(copyin(p->pagetable, (char *)ip, addr, sizeof(*ip)) != 0)
return -1;
return 0;
}
// Fetch the nul-terminated string at addr from the current process.
// Returns length of string, not including nul, or -1 for error.
int
fetchstr(uint64 addr, char *buf, int max)
{
struct proc *p = myproc();
if(copyinstr(p->pagetable, buf, addr, max) < 0)
return -1;
return strlen(buf);
}
static uint64
argraw(int n)
{
struct proc *p = myproc();
switch (n) {
case 0:
return p->trapframe->a0;
case 1:
return p->trapframe->a1;
case 2:
return p->trapframe->a2;
case 3:
return p->trapframe->a3;
case 4:
return p->trapframe->a4;
case 5:
return p->trapframe->a5;
}
panic("argraw");
return -1;
}
// Fetch the nth 32-bit system call argument.
void
argint(int n, int *ip)
{
*ip = argraw(n);
}
// Retrieve an argument as a pointer.
// Doesn't check for legality, since
// copyin/copyout will do that.
void
argaddr(int n, uint64 *ip)
{
*ip = argraw(n);
}
// Fetch the nth word-sized system call argument as a null-terminated string.
// Copies into buf, at most max.
// Returns string length if OK (including nul), -1 if error.
int
argstr(int n, char *buf, int max)
{
uint64 addr;
argaddr(n, &addr);
return fetchstr(addr, buf, max);
}
// Prototypes for the functions that handle system calls.
extern uint64 sys_fork(void);
extern uint64 sys_exit(void);
extern uint64 sys_wait(void);
extern uint64 sys_pipe(void);
extern uint64 sys_read(void);
extern uint64 sys_kill(void);
extern uint64 sys_exec(void);
extern uint64 sys_fstat(void);
extern uint64 sys_chdir(void);
extern uint64 sys_dup(void);
extern uint64 sys_getpid(void);
extern uint64 sys_sbrk(void);
extern uint64 sys_sleep(void);
extern uint64 sys_uptime(void);
extern uint64 sys_open(void);
extern uint64 sys_write(void);
extern uint64 sys_mknod(void);
extern uint64 sys_unlink(void);
extern uint64 sys_link(void);
extern uint64 sys_mkdir(void);
extern uint64 sys_close(void);
// An array mapping syscall numbers from syscall.h
// to the function that handles the system call.
static uint64 (*syscalls[])(void) = {
[SYS_fork] sys_fork,
[SYS_exit] sys_exit,
[SYS_wait] sys_wait,
[SYS_pipe] sys_pipe,
[SYS_read] sys_read,
[SYS_kill] sys_kill,
[SYS_exec] sys_exec,
[SYS_fstat] sys_fstat,
[SYS_chdir] sys_chdir,
[SYS_dup] sys_dup,
[SYS_getpid] sys_getpid,
[SYS_sbrk] sys_sbrk,
[SYS_sleep] sys_sleep,
[SYS_uptime] sys_uptime,
[SYS_open] sys_open,
[SYS_write] sys_write,
[SYS_mknod] sys_mknod,
[SYS_unlink] sys_unlink,
[SYS_link] sys_link,
[SYS_mkdir] sys_mkdir,
[SYS_close] sys_close,
};
void
syscall(void)
{
int num;
struct proc *p = myproc();
num = p->trapframe->a7;
if(num > 0 && num < NELEM(syscalls) && syscalls[num]) {
// Use num to lookup the system call function for num, call it,
// and store its return value in p->trapframe->a0
p->trapframe->a0 = syscalls[num]();
} else {
printf("%d %s: unknown sys call %d\n",
p->pid, p->name, num);
p->trapframe->a0 = -1;
}
}
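The dispatch in syscall() above relies on a sparse array built with designated initializers: syscalls[] is indexed by call number, unused slots are zero, and the bounds-and-null check rejects bad numbers. The same pattern in isolation (the call name and number here are hypothetical, not xv6's):

```c
#include <stdint.h>

#define SYS_hello 1   // hypothetical call number, for illustration only

static uint64_t sys_hello(void) { return 42; }

// sparse designated-initializer table, as in syscalls[] above
static uint64_t (*calls[])(void) = {
    [SYS_hello] = sys_hello,
};

// mirrors syscall()'s bounds-and-null check before dispatching
static uint64_t dispatch(int num)
{
    int n = sizeof(calls) / sizeof(calls[0]);
    if (num > 0 && num < n && calls[num])
        return calls[num]();
    return (uint64_t)-1;
}
```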


@@ -20,4 +20,3 @@
#define SYS_link 19
#define SYS_mkdir 20
#define SYS_close 21
#define SYS_shutdown 22


@@ -18,7 +18,21 @@
// Fetch the nth word-sized system call argument as a file descriptor
// and return both the descriptor and the corresponding struct file.
int argfd(int n, int *pfd, struct file **pf);
static int
argfd(int n, int *pfd, struct file **pf)
{
int fd;
struct file *f;
argint(n, &fd);
if(fd < 0 || fd >= NOFILE || (f=myproc()->ofile[fd]) == 0)
return -1;
if(pfd)
*pfd = fd;
if(pf)
*pf = f;
return 0;
}
// Allocate a file descriptor for the given file.
// Takes over file reference from caller on success.
@@ -28,10 +42,8 @@ fdalloc(struct file *f)
int fd;
struct proc *p = myproc();
for (fd = 0; fd < NOFILE; fd++)
{
if (p->ofile[fd] == 0)
{
for(fd = 0; fd < NOFILE; fd++){
if(p->ofile[fd] == 0){
p->ofile[fd] = f;
return fd;
}
@@ -39,6 +51,74 @@ fdalloc(struct file *f)
return -1;
}
uint64
sys_dup(void)
{
struct file *f;
int fd;
if(argfd(0, 0, &f) < 0)
return -1;
if((fd=fdalloc(f)) < 0)
return -1;
filedup(f);
return fd;
}
uint64
sys_read(void)
{
struct file *f;
int n;
uint64 p;
argaddr(1, &p);
argint(2, &n);
if(argfd(0, 0, &f) < 0)
return -1;
return fileread(f, p, n);
}
uint64
sys_write(void)
{
struct file *f;
int n;
uint64 p;
argaddr(1, &p);
argint(2, &n);
if(argfd(0, 0, &f) < 0)
return -1;
return filewrite(f, p, n);
}
uint64
sys_close(void)
{
int fd;
struct file *f;
if(argfd(0, &fd, &f) < 0)
return -1;
myproc()->ofile[fd] = 0;
fileclose(f);
return 0;
}
uint64
sys_fstat(void)
{
struct file *f;
uint64 st; // user pointer to struct stat
argaddr(1, &st);
if(argfd(0, 0, &f) < 0)
return -1;
return filestat(f, st);
}
// Create the path new as a link to the same inode as old.
uint64
sys_link(void)
@@ -46,19 +126,17 @@ sys_link(void)
char name[DIRSIZ], new[MAXPATH], old[MAXPATH];
struct inode *dp, *ip;
if (argstr(0, old, MAXPATH) < 0 || argstr(1, new, MAXPATH) < 0)
if(argstr(0, old, MAXPATH) < 0 || argstr(1, new, MAXPATH) < 0)
return -1;
begin_op();
if ((ip = namei(old)) == 0)
{
if((ip = namei(old)) == 0){
end_op();
return -1;
}
ilock(ip);
if (ip->type == T_DIR)
{
if(ip->type == T_DIR){
iunlockput(ip);
end_op();
return -1;
@@ -68,11 +146,10 @@ sys_link(void)
iupdate(ip);
iunlock(ip);
if ((dp = nameiparent(new, name)) == 0)
if((dp = nameiparent(new, name)) == 0)
goto bad;
ilock(dp);
if (dp->dev != ip->dev || dirlink(dp, name, ip->inum) < 0)
{
if(dp->dev != ip->dev || dirlink(dp, name, ip->inum) < 0){
iunlockput(dp);
goto bad;
}
@@ -99,11 +176,10 @@ isdirempty(struct inode *dp)
int off;
struct dirent de;
for (off = 2 * sizeof(de); off < dp->size; off += sizeof(de))
{
if (readi(dp, 0, (uint64)&de, off, sizeof(de)) != sizeof(de))
for(off=2*sizeof(de); off<dp->size; off+=sizeof(de)){
if(readi(dp, 0, (uint64)&de, off, sizeof(de)) != sizeof(de))
panic("isdirempty: readi");
if (de.inum != 0)
if(de.inum != 0)
return 0;
}
return 1;
@@ -117,12 +193,11 @@ sys_unlink(void)
char name[DIRSIZ], path[MAXPATH];
uint off;
if (argstr(0, path, MAXPATH) < 0)
if(argstr(0, path, MAXPATH) < 0)
return -1;
begin_op();
if ((dp = nameiparent(path, name)) == 0)
{
if((dp = nameiparent(path, name)) == 0){
end_op();
return -1;
}
@@ -130,26 +205,24 @@ sys_unlink(void)
ilock(dp);
// Cannot unlink "." or "..".
if (namecmp(name, ".") == 0 || namecmp(name, "..") == 0)
if(namecmp(name, ".") == 0 || namecmp(name, "..") == 0)
goto bad;
if ((ip = dirlookup(dp, name, &off)) == 0)
if((ip = dirlookup(dp, name, &off)) == 0)
goto bad;
ilock(ip);
if (ip->nlink < 1)
if(ip->nlink < 1)
panic("unlink: nlink < 1");
if (ip->type == T_DIR && !isdirempty(ip))
{
if(ip->type == T_DIR && !isdirempty(ip)){
iunlockput(ip);
goto bad;
}
memset(&de, 0, sizeof(de));
if (writei(dp, 0, (uint64)&de, off, sizeof(de)) != sizeof(de))
if(writei(dp, 0, (uint64)&de, off, sizeof(de)) != sizeof(de))
panic("unlink: writei");
if (ip->type == T_DIR)
{
if(ip->type == T_DIR){
dp->nlink--;
iupdate(dp);
}
@@ -169,29 +242,27 @@ bad:
return -1;
}
static struct inode *
static struct inode*
create(char *path, short type, short major, short minor)
{
struct inode *ip, *dp;
char name[DIRSIZ];
if ((dp = nameiparent(path, name)) == 0)
if((dp = nameiparent(path, name)) == 0)
return 0;
ilock(dp);
if ((ip = dirlookup(dp, name, 0)) != 0)
{
if((ip = dirlookup(dp, name, 0)) != 0){
iunlockput(dp);
ilock(ip);
if (type == T_FILE && (ip->type == T_FILE || ip->type == T_DEVICE))
if(type == T_FILE && (ip->type == T_FILE || ip->type == T_DEVICE))
return ip;
iunlockput(ip);
return 0;
}
if ((ip = ialloc(dp->dev, type)) == 0)
{
if((ip = ialloc(dp->dev, type)) == 0){
iunlockput(dp);
return 0;
}
@@ -202,20 +273,18 @@ create(char *path, short type, short major, short minor)
ip->nlink = 1;
iupdate(ip);
if (type == T_DIR)
{ // Create . and .. entries.
if(type == T_DIR){ // Create . and .. entries.
// No ip->nlink++ for ".": avoid cyclic ref count.
if (dirlink(ip, ".", ip->inum) < 0 || dirlink(ip, "..", dp->inum) < 0)
if(dirlink(ip, ".", ip->inum) < 0 || dirlink(ip, "..", dp->inum) < 0)
goto fail;
}
if (dirlink(dp, name, ip->inum) < 0)
if(dirlink(dp, name, ip->inum) < 0)
goto fail;
if (type == T_DIR)
{
if(type == T_DIR){
// now that success is guaranteed:
dp->nlink++; // for ".."
dp->nlink++; // for ".."
iupdate(dp);
}
@@ -223,7 +292,7 @@ create(char *path, short type, short major, short minor)
return ip;
fail:
fail:
// something went wrong. de-allocate ip.
ip->nlink = 0;
iupdate(ip);
@@ -242,59 +311,48 @@ sys_open(void)
int n;
argint(1, &omode);
if ((n = argstr(0, path, MAXPATH)) < 0)
if((n = argstr(0, path, MAXPATH)) < 0)
return -1;
begin_op();
if (omode & O_CREATE)
{
if(omode & O_CREATE){
ip = create(path, T_FILE, 0, 0);
if (ip == 0)
{
if(ip == 0){
end_op();
return -1;
}
}
else
{
if ((ip = namei(path)) == 0)
{
} else {
if((ip = namei(path)) == 0){
end_op();
return -1;
}
ilock(ip);
if (ip->type == T_DIR && omode != O_RDONLY)
{
if(ip->type == T_DIR && omode != O_RDONLY){
iunlockput(ip);
end_op();
return -1;
}
}
if (ip->type == T_DEVICE && (ip->major < 0 || ip->major >= NDEV))
{
if(ip->type == T_DEVICE && (ip->major < 0 || ip->major >= NDEV)){
iunlockput(ip);
end_op();
return -1;
}
if ((f = filealloc()) == 0 || (fd = fdalloc(f)) < 0)
{
if (f)
if((f = filealloc()) == 0 || (fd = fdalloc(f)) < 0){
if(f)
fileclose(f);
iunlockput(ip);
end_op();
return -1;
}
if (ip->type == T_DEVICE)
{
if(ip->type == T_DEVICE){
f->type = FD_DEVICE;
f->major = ip->major;
}
else
{
} else {
f->type = FD_INODE;
f->off = 0;
}
@@ -302,8 +360,7 @@ sys_open(void)
f->readable = !(omode & O_WRONLY);
f->writable = (omode & O_WRONLY) || (omode & O_RDWR);
if ((omode & O_TRUNC) && ip->type == T_FILE)
{
if((omode & O_TRUNC) && ip->type == T_FILE){
itrunc(ip);
}
@@ -320,8 +377,7 @@ sys_mkdir(void)
struct inode *ip;
begin_op();
if (argstr(0, path, MAXPATH) < 0 || (ip = create(path, T_DIR, 0, 0)) == 0)
{
if(argstr(0, path, MAXPATH) < 0 || (ip = create(path, T_DIR, 0, 0)) == 0){
end_op();
return -1;
}
@@ -340,9 +396,8 @@ sys_mknod(void)
begin_op();
argint(1, &major);
argint(2, &minor);
if ((argstr(0, path, MAXPATH)) < 0 ||
(ip = create(path, T_DEVICE, major, minor)) == 0)
{
if((argstr(0, path, MAXPATH)) < 0 ||
(ip = create(path, T_DEVICE, major, minor)) == 0){
end_op();
return -1;
}
@@ -351,6 +406,31 @@ sys_mknod(void)
return 0;
}
uint64
sys_chdir(void)
{
char path[MAXPATH];
struct inode *ip;
struct proc *p = myproc();
begin_op();
if(argstr(0, path, MAXPATH) < 0 || (ip = namei(path)) == 0){
end_op();
return -1;
}
ilock(ip);
if(ip->type != T_DIR){
iunlockput(ip);
end_op();
return -1;
}
iunlock(ip);
iput(p->cwd);
end_op();
p->cwd = ip;
return 0;
}
uint64
sys_exec(void)
{
@@ -359,42 +439,37 @@ sys_exec(void)
uint64 uargv, uarg;
argaddr(1, &uargv);
if (argstr(0, path, MAXPATH) < 0)
{
if(argstr(0, path, MAXPATH) < 0) {
return -1;
}
memset(argv, 0, sizeof(argv));
for (i = 0;; i++)
{
if (i >= NELEM(argv))
{
for(i=0;; i++){
if(i >= NELEM(argv)){
goto bad;
}
if (fetchaddr(uargv + sizeof(uint64) * i, (uint64 *)&uarg) < 0)
{
if(fetchaddr(uargv+sizeof(uint64)*i, (uint64*)&uarg) < 0){
goto bad;
}
if (uarg == 0)
{
if(uarg == 0){
argv[i] = 0;
break;
}
argv[i] = kalloc();
if (argv[i] == 0)
if(argv[i] == 0)
goto bad;
if (fetchstr(uarg, argv[i], PGSIZE) < 0)
if(fetchstr(uarg, argv[i], PGSIZE) < 0)
goto bad;
}
int ret = exec(path, argv);
for (i = 0; i < NELEM(argv) && argv[i] != 0; i++)
for(i = 0; i < NELEM(argv) && argv[i] != 0; i++)
kfree(argv[i]);
return ret;
bad:
for (i = 0; i < NELEM(argv) && argv[i] != 0; i++)
bad:
for(i = 0; i < NELEM(argv) && argv[i] != 0; i++)
kfree(argv[i]);
return -1;
}
@@ -408,20 +483,18 @@ sys_pipe(void)
struct proc *p = myproc();
argaddr(0, &fdarray);
if (pipealloc(&rf, &wf) < 0)
if(pipealloc(&rf, &wf) < 0)
return -1;
fd0 = -1;
if ((fd0 = fdalloc(rf)) < 0 || (fd1 = fdalloc(wf)) < 0)
{
if (fd0 >= 0)
if((fd0 = fdalloc(rf)) < 0 || (fd1 = fdalloc(wf)) < 0){
if(fd0 >= 0)
p->ofile[fd0] = 0;
fileclose(rf);
fileclose(wf);
return -1;
}
if (copyout(p->pagetable, fdarray, (char *)&fd0, sizeof(fd0)) < 0 ||
copyout(p->pagetable, fdarray + sizeof(fd0), (char *)&fd1, sizeof(fd1)) < 0)
{
if(copyout(p->pagetable, fdarray, (char*)&fd0, sizeof(fd0)) < 0 ||
copyout(p->pagetable, fdarray+sizeof(fd0), (char *)&fd1, sizeof(fd1)) < 0){
p->ofile[fd0] = 0;
p->ofile[fd1] = 0;
fileclose(rf);

kernel/sysproc.c Normal file

@@ -0,0 +1,91 @@
#include "types.h"
#include "riscv.h"
#include "defs.h"
#include "param.h"
#include "memlayout.h"
#include "spinlock.h"
#include "proc.h"
uint64
sys_exit(void)
{
int n;
argint(0, &n);
exit(n);
return 0; // not reached
}
uint64
sys_getpid(void)
{
return myproc()->pid;
}
uint64
sys_fork(void)
{
return fork();
}
uint64
sys_wait(void)
{
uint64 p;
argaddr(0, &p);
return wait(p);
}
uint64
sys_sbrk(void)
{
uint64 addr;
int n;
argint(0, &n);
addr = myproc()->sz;
if(growproc(n) < 0)
return -1;
return addr;
}
uint64
sys_sleep(void)
{
int n;
uint ticks0;
argint(0, &n);
acquire(&tickslock);
ticks0 = ticks;
while(ticks - ticks0 < n){
if(killed(myproc())){
release(&tickslock);
return -1;
}
sleep(&ticks, &tickslock);
}
release(&tickslock);
return 0;
}
uint64
sys_kill(void)
{
int pid;
argint(0, &pid);
return kill(pid);
}
// return how many clock tick interrupts have occurred
// since start.
uint64
sys_uptime(void)
{
uint xticks;
acquire(&tickslock);
xticks = ticks;
release(&tickslock);
return xticks;
}

kernel/trap.c Normal file

@@ -0,0 +1,221 @@
#include "types.h"
#include "param.h"
#include "memlayout.h"
#include "riscv.h"
#include "spinlock.h"
#include "proc.h"
#include "defs.h"
struct spinlock tickslock;
uint ticks;
extern char trampoline[], uservec[], userret[];
// in kernelvec.S, calls kerneltrap().
void kernelvec();
extern int devintr();
void
trapinit(void)
{
initlock(&tickslock, "time");
}
// set up to take exceptions and traps while in the kernel.
void
trapinithart(void)
{
w_stvec((uint64)kernelvec);
}
//
// handle an interrupt, exception, or system call from user space.
// called from trampoline.S
//
void
usertrap(void)
{
int which_dev = 0;
if((r_sstatus() & SSTATUS_SPP) != 0)
panic("usertrap: not from user mode");
// send interrupts and exceptions to kerneltrap(),
// since we're now in the kernel.
w_stvec((uint64)kernelvec);
struct proc *p = myproc();
// save user program counter.
p->trapframe->epc = r_sepc();
if(r_scause() == 8){
// system call
if(killed(p))
exit(-1);
// sepc points to the ecall instruction,
// but we want to return to the next instruction.
p->trapframe->epc += 4;
// an interrupt will change sepc, scause, and sstatus,
// so enable only now that we're done with those registers.
intr_on();
syscall();
} else if((which_dev = devintr()) != 0){
// ok
} else {
printf("usertrap(): unexpected scause %p pid=%d\n", r_scause(), p->pid);
printf(" sepc=%p stval=%p\n", r_sepc(), r_stval());
setkilled(p);
}
if(killed(p))
exit(-1);
// give up the CPU if this is a timer interrupt.
if(which_dev == 2)
yield();
usertrapret();
}
//
// return to user space
//
void
usertrapret(void)
{
struct proc *p = myproc();
// we're about to switch the destination of traps from
// kerneltrap() to usertrap(), so turn off interrupts until
// we're back in user space, where usertrap() is correct.
intr_off();
// send syscalls, interrupts, and exceptions to uservec in trampoline.S
uint64 trampoline_uservec = TRAMPOLINE + (uservec - trampoline);
w_stvec(trampoline_uservec);
// set up trapframe values that uservec will need when
// the process next traps into the kernel.
p->trapframe->kernel_satp = r_satp(); // kernel page table
p->trapframe->kernel_sp = p->kstack + PGSIZE; // process's kernel stack
p->trapframe->kernel_trap = (uint64)usertrap;
p->trapframe->kernel_hartid = r_tp(); // hartid for cpuid()
// set up the registers that trampoline.S's sret will use
// to get to user space.
// set S Previous Privilege mode to User.
unsigned long x = r_sstatus();
x &= ~SSTATUS_SPP; // clear SPP to 0 for user mode
x |= SSTATUS_SPIE; // enable interrupts in user mode
w_sstatus(x);
// set S Exception Program Counter to the saved user pc.
w_sepc(p->trapframe->epc);
// tell trampoline.S the user page table to switch to.
uint64 satp = MAKE_SATP(p->pagetable);
// jump to userret in trampoline.S at the top of memory, which
// switches to the user page table, restores user registers,
// and switches to user mode with sret.
uint64 trampoline_userret = TRAMPOLINE + (userret - trampoline);
((void (*)(uint64))trampoline_userret)(satp);
}
// interrupts and exceptions from kernel code go here via kernelvec,
// on whatever the current kernel stack is.
void
kerneltrap()
{
int which_dev = 0;
uint64 sepc = r_sepc();
uint64 sstatus = r_sstatus();
uint64 scause = r_scause();
if((sstatus & SSTATUS_SPP) == 0)
panic("kerneltrap: not from supervisor mode");
if(intr_get() != 0)
panic("kerneltrap: interrupts enabled");
if((which_dev = devintr()) == 0){
printf("scause %p\n", scause);
printf("sepc=%p stval=%p\n", r_sepc(), r_stval());
panic("kerneltrap");
}
// give up the CPU if this is a timer interrupt.
if(which_dev == 2 && myproc() != 0 && myproc()->state == RUNNING)
yield();
// the yield() may have caused some traps to occur,
// so restore trap registers for use by kernelvec.S's sepc instruction.
w_sepc(sepc);
w_sstatus(sstatus);
}
void
clockintr()
{
acquire(&tickslock);
ticks++;
wakeup(&ticks);
release(&tickslock);
}
// check if it's an external interrupt or software interrupt,
// and handle it.
// returns 2 if timer interrupt,
// 1 if other device,
// 0 if not recognized.
int
devintr()
{
uint64 scause = r_scause();
if((scause & 0x8000000000000000L) &&
(scause & 0xff) == 9){
// this is a supervisor external interrupt, via PLIC.
// irq indicates which device interrupted.
int irq = plic_claim();
if(irq == UART0_IRQ){
uartintr();
} else if(irq == VIRTIO0_IRQ){
virtio_disk_intr();
} else if(irq){
printf("unexpected interrupt irq=%d\n", irq);
}
// the PLIC allows each device to raise at most one
// interrupt at a time; tell the PLIC the device is
// now allowed to interrupt again.
if(irq)
plic_complete(irq);
return 1;
} else if(scause == 0x8000000000000001L){
// software interrupt from a machine-mode timer interrupt,
// forwarded by timervec in kernelvec.S.
if(cpuid() == 0){
clockintr();
}
// acknowledge the software interrupt by clearing
// the SSIP bit in sip.
w_sip(r_sip() & ~2);
return 2;
} else {
return 0;
}
}
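The scause tests in devintr() above can be read as a small predicate: bit 63 distinguishes interrupts from exceptions, and the low bits give the cause code, 9 being a supervisor external interrupt delivered via the PLIC. A stand-alone sketch of that first test:

```c
#include <stdint.h>

// 1 if scause encodes a supervisor external interrupt (PLIC),
// matching the first test in devintr() above: interrupt bit set
// (bit 63) and cause code 9 in the low bits.
static int is_external_intr(uint64_t scause)
{
    return (scause >> 63) != 0 && (scause & 0xff) == 9;
}
```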


@@ -1,5 +1,3 @@
#pragma once
typedef unsigned int uint;
typedef unsigned short ushort;
typedef unsigned char uchar;

kernel/uart.c Normal file

@@ -0,0 +1,190 @@
//
// low-level driver routines for 16550a UART.
//
#include "types.h"
#include "param.h"
#include "memlayout.h"
#include "riscv.h"
#include "spinlock.h"
#include "proc.h"
#include "defs.h"
// the UART control registers are memory-mapped
// at address UART0. this macro returns the
// address of one of the registers.
#define Reg(reg) ((volatile unsigned char *)(UART0 + reg))
// the UART control registers.
// some have different meanings for
// read vs write.
// see http://byterunner.com/16550.html
#define RHR 0 // receive holding register (for input bytes)
#define THR 0 // transmit holding register (for output bytes)
#define IER 1 // interrupt enable register
#define IER_RX_ENABLE (1<<0)
#define IER_TX_ENABLE (1<<1)
#define FCR 2 // FIFO control register
#define FCR_FIFO_ENABLE (1<<0)
#define FCR_FIFO_CLEAR (3<<1) // clear the content of the two FIFOs
#define ISR 2 // interrupt status register
#define LCR 3 // line control register
#define LCR_EIGHT_BITS (3<<0)
#define LCR_BAUD_LATCH (1<<7) // special mode to set baud rate
#define LSR 5 // line status register
#define LSR_RX_READY (1<<0) // input is waiting to be read from RHR
#define LSR_TX_IDLE (1<<5) // THR can accept another character to send
#define ReadReg(reg) (*(Reg(reg)))
#define WriteReg(reg, v) (*(Reg(reg)) = (v))
// the transmit output buffer.
struct spinlock uart_tx_lock;
#define UART_TX_BUF_SIZE 32
char uart_tx_buf[UART_TX_BUF_SIZE];
uint64 uart_tx_w; // write next to uart_tx_buf[uart_tx_w % UART_TX_BUF_SIZE]
uint64 uart_tx_r; // read next from uart_tx_buf[uart_tx_r % UART_TX_BUF_SIZE]
extern volatile int panicked; // from printf.c
void uartstart();
void
uartinit(void)
{
// disable interrupts.
WriteReg(IER, 0x00);
// special mode to set baud rate.
WriteReg(LCR, LCR_BAUD_LATCH);
// LSB for baud rate of 38.4K.
WriteReg(0, 0x03);
// MSB for baud rate of 38.4K.
WriteReg(1, 0x00);
// leave set-baud mode,
// and set word length to 8 bits, no parity.
WriteReg(LCR, LCR_EIGHT_BITS);
// reset and enable FIFOs.
WriteReg(FCR, FCR_FIFO_ENABLE | FCR_FIFO_CLEAR);
// enable transmit and receive interrupts.
WriteReg(IER, IER_TX_ENABLE | IER_RX_ENABLE);
initlock(&uart_tx_lock, "uart");
}
// add a character to the output buffer and tell the
// UART to start sending if it isn't already.
// blocks if the output buffer is full.
// because it may block, it can't be called
// from interrupts; it's only suitable for use
// by write().
void
uartputc(int c)
{
acquire(&uart_tx_lock);
if(panicked){
for(;;)
;
}
while(uart_tx_w == uart_tx_r + UART_TX_BUF_SIZE){
// buffer is full.
// wait for uartstart() to open up space in the buffer.
sleep(&uart_tx_r, &uart_tx_lock);
}
uart_tx_buf[uart_tx_w % UART_TX_BUF_SIZE] = c;
uart_tx_w += 1;
uartstart();
release(&uart_tx_lock);
}
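The transmit buffer above uses unbounded indices: uart_tx_w and uart_tx_r only ever increase, full and empty are detected from their difference, and the modulo picks the slot, so neither index is ever wrapped explicitly. The same scheme in isolation (hypothetical names, no locking; the real driver holds uart_tx_lock):

```c
#include <stdint.h>

#define RING_SIZE 32   // power-of-two size, like UART_TX_BUF_SIZE above

// unbounded-index ring buffer: w and r only increase; full/empty come
// from their difference and the modulo selects the slot.
struct ring { char buf[RING_SIZE]; uint64_t w, r; };

static int  ring_full(struct ring *q)  { return q->w == q->r + RING_SIZE; }
static int  ring_empty(struct ring *q) { return q->w == q->r; }
static void ring_put(struct ring *q, char c) { q->buf[q->w++ % RING_SIZE] = c; }
static char ring_get(struct ring *q) { return q->buf[q->r++ % RING_SIZE]; }
```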
// alternate version of uartputc() that doesn't
// use interrupts, for use by kernel printf() and
// to echo characters. it spins waiting for the uart's
// output register to be empty.
void
uartputc_sync(int c)
{
push_off();
if(panicked){
for(;;)
;
}
// wait for Transmit Holding Empty to be set in LSR.
while((ReadReg(LSR) & LSR_TX_IDLE) == 0)
;
WriteReg(THR, c);
pop_off();
}
// if the UART is idle, and a character is waiting
// in the transmit buffer, send it.
// caller must hold uart_tx_lock.
// called from both the top- and bottom-half.
void
uartstart()
{
while(1){
if(uart_tx_w == uart_tx_r){
// transmit buffer is empty.
return;
}
if((ReadReg(LSR) & LSR_TX_IDLE) == 0){
// the UART transmit holding register is full,
// so we cannot give it another byte.
// it will interrupt when it's ready for a new byte.
return;
}
int c = uart_tx_buf[uart_tx_r % UART_TX_BUF_SIZE];
uart_tx_r += 1;
// maybe uartputc() is waiting for space in the buffer.
wakeup(&uart_tx_r);
WriteReg(THR, c);
}
}
// read one input character from the UART.
// return -1 if none is waiting.
int
uartgetc(void)
{
if(ReadReg(LSR) & 0x01){
// input data is ready.
return ReadReg(RHR);
} else {
return -1;
}
}
// handle a uart interrupt, raised because input has
// arrived, or the uart is ready for more output, or
// both. called from devintr().
void
uartintr(void)
{
// read and process incoming characters.
while(1){
int c = uartgetc();
if(c == -1)
break;
consoleintr(c);
}
// send buffered characters.
acquire(&uart_tx_lock);
uartstart();
release(&uart_tx_lock);
}


@@ -7,8 +7,6 @@
// https://docs.oasis-open.org/virtio/virtio/v1.1/virtio-v1.1.pdf
//
#include "types.h"
// virtio mmio control registers, mapped starting at 0x10001000.
// from qemu virtio_mmio.h
#define VIRTIO_MMIO_MAGIC_VALUE 0x000 // 0x74726976


@@ -229,7 +229,7 @@ virtio_disk_rw(struct buf *b, int write)
if(alloc3_desc(idx) == 0) {
break;
}
sleep_lock(&disk.free[0], &disk.vdisk_lock);
sleep(&disk.free[0], &disk.vdisk_lock);
}
// format the three descriptors.
@@ -282,7 +282,7 @@ virtio_disk_rw(struct buf *b, int write)
// Wait for virtio_disk_intr() to say request has finished.
while(b->disk == 1) {
sleep_lock(b, &disk.vdisk_lock);
sleep(b, &disk.vdisk_lock);
}
disk.info[idx[0]].b = 0;

kernel/vm.c Normal file

@@ -0,0 +1,439 @@
#include "param.h"
#include "types.h"
#include "memlayout.h"
#include "elf.h"
#include "riscv.h"
#include "defs.h"
#include "fs.h"
/*
* the kernel's page table.
*/
pagetable_t kernel_pagetable;
extern char etext[]; // kernel.ld sets this to end of kernel code.
extern char trampoline[]; // trampoline.S
// Make a direct-map page table for the kernel.
pagetable_t
kvmmake(void)
{
pagetable_t kpgtbl;
kpgtbl = (pagetable_t) kalloc();
memset(kpgtbl, 0, PGSIZE);
// uart registers
kvmmap(kpgtbl, UART0, UART0, PGSIZE, PTE_R | PTE_W);
// virtio mmio disk interface
kvmmap(kpgtbl, VIRTIO0, VIRTIO0, PGSIZE, PTE_R | PTE_W);
// PLIC
kvmmap(kpgtbl, PLIC, PLIC, 0x400000, PTE_R | PTE_W);
// map kernel text executable and read-only.
kvmmap(kpgtbl, KERNBASE, KERNBASE, (uint64)etext-KERNBASE, PTE_R | PTE_X);
// map kernel data and the physical RAM we'll make use of.
kvmmap(kpgtbl, (uint64)etext, (uint64)etext, PHYSTOP-(uint64)etext, PTE_R | PTE_W);
// map the trampoline for trap entry/exit to
// the highest virtual address in the kernel.
kvmmap(kpgtbl, TRAMPOLINE, (uint64)trampoline, PGSIZE, PTE_R | PTE_X);
// allocate and map a kernel stack for each process.
proc_mapstacks(kpgtbl);
return kpgtbl;
}
// Initialize the one kernel_pagetable
void
kvminit(void)
{
kernel_pagetable = kvmmake();
}
// Switch h/w page table register to the kernel's page table,
// and enable paging.
void
kvminithart()
{
// wait for any previous writes to the page table memory to finish.
sfence_vma();
w_satp(MAKE_SATP(kernel_pagetable));
// flush stale entries from the TLB.
sfence_vma();
}
// Return the address of the PTE in page table pagetable
// that corresponds to virtual address va. If alloc!=0,
// create any required page-table pages.
//
// The risc-v Sv39 scheme has three levels of page-table
// pages. A page-table page contains 512 64-bit PTEs.
// A 64-bit virtual address is split into five fields:
// 39..63 -- must be zero.
// 30..38 -- 9 bits of level-2 index.
// 21..29 -- 9 bits of level-1 index.
// 12..20 -- 9 bits of level-0 index.
// 0..11 -- 12 bits of byte offset within the page.
pte_t *
walk(pagetable_t pagetable, uint64 va, int alloc)
{
if(va >= MAXVA)
panic("walk");
for(int level = 2; level > 0; level--) {
pte_t *pte = &pagetable[PX(level, va)];
if(*pte & PTE_V) {
pagetable = (pagetable_t)PTE2PA(*pte);
} else {
if(!alloc || (pagetable = (pde_t*)kalloc()) == 0)
return 0;
memset(pagetable, 0, PGSIZE);
*pte = PA2PTE(pagetable) | PTE_V;
}
}
return &pagetable[PX(0, va)];
}
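The three-level Sv39 split described in the comment above can be checked in isolation. This sketch mirrors the index extraction walk() performs via xv6's PX macro: three 9-bit index fields stacked above a 12-bit page offset.

```c
#include <stdint.h>

// index of va at a given page-table level (Sv39): 9 bits per level,
// starting above the 12-bit page offset -- the same arithmetic as
// xv6's PX(level, va) macro used by walk() above.
static uint64_t px(int level, uint64_t va)
{
    return (va >> (12 + 9 * level)) & 0x1ff;
}
```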
// Look up a virtual address, return the physical address,
// or 0 if not mapped.
// Can only be used to look up user pages.
uint64
walkaddr(pagetable_t pagetable, uint64 va)
{
pte_t *pte;
uint64 pa;
if(va >= MAXVA)
return 0;
pte = walk(pagetable, va, 0);
if(pte == 0)
return 0;
if((*pte & PTE_V) == 0)
return 0;
if((*pte & PTE_U) == 0)
return 0;
pa = PTE2PA(*pte);
return pa;
}
// add a mapping to the kernel page table.
// only used when booting.
// does not flush TLB or enable paging.
void
kvmmap(pagetable_t kpgtbl, uint64 va, uint64 pa, uint64 sz, int perm)
{
if(mappages(kpgtbl, va, sz, pa, perm) != 0)
panic("kvmmap");
}
// Create PTEs for virtual addresses starting at va that refer to
// physical addresses starting at pa. va and size might not
// be page-aligned. Returns 0 on success, -1 if walk() couldn't
// allocate a needed page-table page.
int
mappages(pagetable_t pagetable, uint64 va, uint64 size, uint64 pa, int perm)
{
uint64 a, last;
pte_t *pte;
if(size == 0)
panic("mappages: size");
a = PGROUNDDOWN(va);
last = PGROUNDDOWN(va + size - 1);
for(;;){
if((pte = walk(pagetable, a, 1)) == 0)
return -1;
if(*pte & PTE_V)
panic("mappages: remap");
*pte = PA2PTE(pa) | perm | PTE_V;
if(a == last)
break;
a += PGSIZE;
pa += PGSIZE;
}
return 0;
}
// Remove npages of mappings starting from va. va must be
// page-aligned. The mappings must exist.
// Optionally free the physical memory.
void
uvmunmap(pagetable_t pagetable, uint64 va, uint64 npages, int do_free)
{
uint64 a;
pte_t *pte;
if((va % PGSIZE) != 0)
panic("uvmunmap: not aligned");
for(a = va; a < va + npages*PGSIZE; a += PGSIZE){
if((pte = walk(pagetable, a, 0)) == 0)
panic("uvmunmap: walk");
if((*pte & PTE_V) == 0)
panic("uvmunmap: not mapped");
if(PTE_FLAGS(*pte) == PTE_V)
panic("uvmunmap: not a leaf");
if(do_free){
uint64 pa = PTE2PA(*pte);
kfree((void*)pa);
}
*pte = 0;
}
}
// create an empty user page table.
// returns 0 if out of memory.
pagetable_t
uvmcreate()
{
pagetable_t pagetable;
pagetable = (pagetable_t) kalloc();
if(pagetable == 0)
return 0;
memset(pagetable, 0, PGSIZE);
return pagetable;
}
// Load the user initcode into address 0 of pagetable,
// for the very first process.
// sz must be less than a page.
void
uvmfirst(pagetable_t pagetable, uchar *src, uint sz)
{
char *mem;
if(sz >= PGSIZE)
panic("uvmfirst: more than a page");
mem = kalloc();
memset(mem, 0, PGSIZE);
mappages(pagetable, 0, PGSIZE, (uint64)mem, PTE_W|PTE_R|PTE_X|PTE_U);
memmove(mem, src, sz);
}
// Allocate PTEs and physical memory to grow process from oldsz to
// newsz, which need not be page aligned. Returns new size or 0 on error.
uint64
uvmalloc(pagetable_t pagetable, uint64 oldsz, uint64 newsz, int xperm)
{
char *mem;
uint64 a;
if(newsz < oldsz)
return oldsz;
oldsz = PGROUNDUP(oldsz);
for(a = oldsz; a < newsz; a += PGSIZE){
mem = kalloc();
if(mem == 0){
uvmdealloc(pagetable, a, oldsz);
return 0;
}
memset(mem, 0, PGSIZE);
if(mappages(pagetable, a, PGSIZE, (uint64)mem, PTE_R|PTE_U|xperm) != 0){
kfree(mem);
uvmdealloc(pagetable, a, oldsz);
return 0;
}
}
return newsz;
}
// Deallocate user pages to bring the process size from oldsz to
// newsz. oldsz and newsz need not be page-aligned, nor does newsz
// need to be less than oldsz. oldsz can be larger than the actual
// process size. Returns the new process size.
uint64
uvmdealloc(pagetable_t pagetable, uint64 oldsz, uint64 newsz)
{
if(newsz >= oldsz)
return oldsz;
if(PGROUNDUP(newsz) < PGROUNDUP(oldsz)){
int npages = (PGROUNDUP(oldsz) - PGROUNDUP(newsz)) / PGSIZE;
uvmunmap(pagetable, PGROUNDUP(newsz), npages, 1);
}
return newsz;
}
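uvmdealloc() above only unmaps whole pages: both sizes are rounded up to page boundaries, and the page count is the difference divided by PGSIZE. That arithmetic in isolation:

```c
#include <stdint.h>

#define PGSIZE 4096
#define PGROUNDUP(sz) (((sz) + PGSIZE - 1) & ~(uint64_t)(PGSIZE - 1))

// pages uvmdealloc() would unmap when shrinking from oldsz to newsz;
// zero when both sizes land in the same page (or newsz >= oldsz).
static uint64_t pages_to_free(uint64_t oldsz, uint64_t newsz)
{
    if (PGROUNDUP(newsz) >= PGROUNDUP(oldsz))
        return 0;
    return (PGROUNDUP(oldsz) - PGROUNDUP(newsz)) / PGSIZE;
}
```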
// Recursively free page-table pages.
// All leaf mappings must already have been removed.
void
freewalk(pagetable_t pagetable)
{
// there are 2^9 = 512 PTEs in a page table.
for(int i = 0; i < 512; i++){
pte_t pte = pagetable[i];
if((pte & PTE_V) && (pte & (PTE_R|PTE_W|PTE_X)) == 0){
// this PTE points to a lower-level page table.
uint64 child = PTE2PA(pte);
freewalk((pagetable_t)child);
pagetable[i] = 0;
} else if(pte & PTE_V){
panic("freewalk: leaf");
}
}
kfree((void*)pagetable);
}
// Free user memory pages,
// then free page-table pages.
void
uvmfree(pagetable_t pagetable, uint64 sz)
{
if(sz > 0)
uvmunmap(pagetable, 0, PGROUNDUP(sz)/PGSIZE, 1);
freewalk(pagetable);
}
// Given a parent process's page table, copy
// its memory into a child's page table.
// Copies both the page table and the
// physical memory.
// returns 0 on success, -1 on failure.
// frees any allocated pages on failure.
int
uvmcopy(pagetable_t old, pagetable_t new, uint64 sz)
{
pte_t *pte;
uint64 pa, i;
uint flags;
char *mem;
for(i = 0; i < sz; i += PGSIZE){
if((pte = walk(old, i, 0)) == 0)
panic("uvmcopy: pte should exist");
if((*pte & PTE_V) == 0)
panic("uvmcopy: page not present");
pa = PTE2PA(*pte);
flags = PTE_FLAGS(*pte);
if((mem = kalloc()) == 0)
goto err;
memmove(mem, (char*)pa, PGSIZE);
if(mappages(new, i, PGSIZE, (uint64)mem, flags) != 0){
kfree(mem);
goto err;
}
}
return 0;
err:
uvmunmap(new, 0, i / PGSIZE, 1);
return -1;
}
// mark a PTE invalid for user access.
// used by exec for the user stack guard page.
void
uvmclear(pagetable_t pagetable, uint64 va)
{
pte_t *pte;
pte = walk(pagetable, va, 0);
if(pte == 0)
panic("uvmclear");
*pte &= ~PTE_U;
}
// Copy from kernel to user.
// Copy len bytes from src to virtual address dstva in a given page table.
// Return 0 on success, -1 on error.
int
copyout(pagetable_t pagetable, uint64 dstva, char *src, uint64 len)
{
uint64 n, va0, pa0;
while(len > 0){
va0 = PGROUNDDOWN(dstva);
pa0 = walkaddr(pagetable, va0);
if(pa0 == 0)
return -1;
n = PGSIZE - (dstva - va0);
if(n > len)
n = len;
memmove((void *)(pa0 + (dstva - va0)), src, n);
len -= n;
src += n;
dstva = va0 + PGSIZE;
}
return 0;
}
// Copy from user to kernel.
// Copy len bytes to dst from virtual address srcva in a given page table.
// Return 0 on success, -1 on error.
int
copyin(pagetable_t pagetable, char *dst, uint64 srcva, uint64 len)
{
uint64 n, va0, pa0;
while(len > 0){
va0 = PGROUNDDOWN(srcva);
pa0 = walkaddr(pagetable, va0);
if(pa0 == 0)
return -1;
n = PGSIZE - (srcva - va0);
if(n > len)
n = len;
memmove(dst, (void *)(pa0 + (srcva - va0)), n);
len -= n;
dst += n;
srcva = va0 + PGSIZE;
}
return 0;
}
// Copy a null-terminated string from user to kernel.
// Copy bytes to dst from virtual address srcva in a given page table,
// until a '\0', or max.
// Return 0 on success, -1 on error.
int
copyinstr(pagetable_t pagetable, char *dst, uint64 srcva, uint64 max)
{
uint64 n, va0, pa0;
int got_null = 0;
while(got_null == 0 && max > 0){
va0 = PGROUNDDOWN(srcva);
pa0 = walkaddr(pagetable, va0);
if(pa0 == 0)
return -1;
n = PGSIZE - (srcva - va0);
if(n > max)
n = max;
char *p = (char *) (pa0 + (srcva - va0));
while(n > 0){
if(*p == '\0'){
*dst = '\0';
got_null = 1;
break;
} else {
*dst = *p;
}
--n;
--max;
p++;
dst++;
}
srcva = va0 + PGSIZE;
}
if(got_null){
return 0;
} else {
return -1;
}
}
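copyout, copyin, and copyinstr above all clamp each step to the bytes remaining on the current page: `n = PGSIZE - (va - va0)` with `va0 = PGROUNDDOWN(va)`, then clamp n to the bytes still requested. The per-page chunk size in isolation:

```c
#include <stdint.h>

#define PGSIZE 4096
#define PGROUNDDOWN(a) ((a) & ~(uint64_t)(PGSIZE - 1))

// bytes from va to the end of its page -- the per-iteration chunk
// size used by copyin/copyout/copyinstr above before clamping to len.
static uint64_t bytes_on_page(uint64_t va)
{
    return PGSIZE - (va - PGROUNDDOWN(va));
}
```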


@@ -1,9 +0,0 @@
K=../kernel
.PHONY: clean
mkfs: mkfs.c $K/fs.h $K/param.h
gcc -Werror -Wall -I. -o mkfs mkfs.c
clean:
rm -f mkfs

View File

@@ -6,10 +6,10 @@
#include <assert.h>
#define stat xv6_stat // avoid clash with host struct stat
#include "../kernel/types.h"
#include "../kernel/fs.h"
#include "../kernel/stat.h"
#include "../kernel/param.h"
#include "kernel/types.h"
#include "kernel/fs.h"
#include "kernel/stat.h"
#include "kernel/param.h"
#ifndef static_assert
#define static_assert(a, b) do { switch (0) case 0: case (a): ; } while (0)
@@ -132,8 +132,6 @@ main(int argc, char *argv[])
char *shortname;
if(strncmp(argv[i], "user/", 5) == 0)
shortname = argv[i] + 5;
else if(strncmp(argv[i], "programs/", 9) == 0)
shortname = argv[i] + 9;
else
shortname = argv[i];


@@ -1,8 +0,0 @@
#include "kernel/types.h"
#include "kernel/stat.h"
#include "user/user.h"
int main(int argc, char *argv[]) {
write(1, "\033[2J\033[H", 7);
exit(0);
}


@@ -1,10 +0,0 @@
#include "kernel/types.h"
#include "kernel/stat.h"
#include "user/user.h"
int main(int argc, char *argv[])
{
shutdown();
// Unreachable
exit(1);
}

Some files were not shown because too many files have changed in this diff.