Commit Graph

119 Commits

Andrew Boie
348d1315d2 x86: 32-bit: restore virtual linking capability
This reverts commit 7d32e9f9a5.

We now allow the kernel to be linked virtually. This patch:

- Properly converts between virtual/physical addresses
- Handles early boot instruction pointer transition
- Double-maps SRAM to both virtual and physical locations
  in boot page tables to facilitate the instruction pointer
  transition, with logic to clean this up after the
  transition completes.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung
c0ee8c4a43 x86: use z_bss_zero and z_data_copy
Instead of doing these in assembly, use the common z_bss_zero()
and z_data_copy() C functions. This simplifies the code a bit,
and we won't miss any future additions to these two functions
under x86 (x86_64 was actually not clearing the gcov BSS area).
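
A minimal sketch of what these two common routines amount to, assuming
the usual Zephyr linker symbols (the symbol names are illustrative, not
necessarily the exact ones in the tree):

    #include <string.h>
    #include <stddef.h>

    extern char __bss_start[], __bss_end[];
    extern char __data_ram_start[], __data_ram_end[], __data_rom_start[];

    /* Zero the BSS section between the linker-provided bounds. */
    void z_bss_zero(void)
    {
        (void)memset(__bss_start, 0, (size_t)(__bss_end - __bss_start));
    }

    /* Copy initialized data from its load address to RAM (XIP images). */
    void z_data_copy(void)
    {
        (void)memcpy(__data_ram_start, __data_rom_start,
                     (size_t)(__data_ram_end - __data_ram_start));
    }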

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-21 22:28:33 -05:00
Daniel Leung
dd98de880a x86: move calling z_loapic_enable into z_x86_prep_c
This moves calling z_loapic_enable() from crt0.S into
z_x86_prep_c(). This is done so we can move BSS clearing
and data section copying inside z_x86_prep_c() as
these are needed before calling z_loapic_enable().
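
A boot-order sketch of the intent (the signatures, the argument passed
to z_loapic_enable(), and the header are assumptions, not the tree's
actual code):

    #include <kernel_internal.h>

    /* Boot-order sketch only; exact prototypes differ in the tree. */
    FUNC_NORETURN void z_x86_prep_c(void *arg)
    {
        z_bss_zero();       /* must run before any C code relies on zeroed BSS */
        z_data_copy();      /* copy initialized data, when built as XIP */
        z_loapic_enable(0); /* safe only after BSS/data are set up */

        z_cstart();         /* hand off to common kernel init; never returns */
        CODE_UNREACHABLE;
    }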

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-21 22:28:33 -05:00
Andy Ross
cce5ff1510 arch/x86: Fix stack alignment for user threads
The x86_64 SysV ABI requires 16 byte alignment for the stack pointer
during execution of normal code.  That means that on entry to an
ABI-compatible C function (which is reached via a CALL instruction
that pushes the return address) the RSP register must be MISaligned by
exactly 8 bytes.  The kernel mode thread setup got this right, but we
missed the equivalent condition in userspace entry.
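
A worked illustration of the requirement (the helper below is
hypothetical, not the patched code): the value loaded into RSP must be
16-byte aligned at the call site, so that the pushed return address
leaves RSP congruent to 8 (mod 16) inside the callee.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical helper: compute the RSP used to enter a C function. */
    static inline uintptr_t entry_rsp(uintptr_t stack_base, size_t stack_size)
    {
        uintptr_t sp = (stack_base + stack_size) & ~(uintptr_t)0xf; /* 16-aligned */

        /*
         * A CALL pushes an 8-byte return address, so inside the callee
         * RSP % 16 == 8, which is what the SysV ABI expects.  A frame
         * built by hand (e.g. for an iretq into user mode) must place a
         * value in RSP that reproduces the same misalignment.
         */
        return sp;
    }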

The end result was a misaligned stack, which is surprisingly robust
for most use.  But recent toolchains have started doing some more
elaborate vectorization, and the resulting SSE instructions started
failing in userspace on the misaligned loads.

Note that there's a comment about optimization: we're doing the stack
alignment in the "wrong place" and are needlessly wasting bytes in
some cases.  We should see the raw stack boundaries where we are
setting up RSP values.  Add a FIXME to this effect, but don't touch
anything as this patch is a targeted bugfix.

Also fix a somewhat embarrassing 32-bit-ism that would have truncated
the address of a userspace stack that we tried to put above 4G.

Fixes #31018

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-03 18:45:48 -05:00
Daniel Leung
dea8fccfb3 x86: clear GS at boot for x86_64
On Intel processors, if GS is not zero and is being set to
zero, GS_BASE is also set to zero. This would interfere
with the actual use of GS_BASE for userspace. To avoid accidentally
clearing GS_BASE, simply set GS to 0 at boot, so any subsequent
clearing of GS will not clear GS_BASE.
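
A minimal sketch of the boot-time fix (inline assembly shown only for
illustration; the real change lives in the assembly startup code, and
the helper name is an assumption):

    /* Load a zero selector into GS once at boot, before GS_BASE is
     * programmed for userspace.  Per the behaviour described above,
     * writing zero to an already-zero GS does not clear GS_BASE, so
     * later clears of GS become harmless.
     */
    static inline void clear_gs_at_boot(void)
    {
        __asm__ volatile("xorl %%eax, %%eax\n\t"
                         "movw %%ax, %%gs"
                         ::: "eax", "memory");
    }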

The clearing of GS_BASE was discovered while trying to figure out
why the mem_protect test would hang within 10-20 repeated runs.
GDB revealed that both GS and GS_BASE were set to zero when the tests
hung. After setting GS to zero at boot, the mem_protect tests
ran repeatedly for 5,000+ iterations without hanging.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-01 21:38:28 -05:00
Daniel Leung
d3218ca515 debug: coredump: remove z_ prefix for stuff used outside subsys
This removes the z_ prefix from those (functions, enums, etc.) that
are used outside the coredump subsys. This aligns better
with the naming convention.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-21 22:08:59 -05:00
Andrew Boie
0a791b7a09 x86: mmu: clarify physical/virtual conversions
The page table implementation requires conversion between virtual
and physical addresses when creating and walking page tables. Add
phys_addr() and virt_addr() functions instead of hard-casting
these values, plus a macro for doing the same in ASM code.

Currently, all pages are identity mapped so VIRT_OFFSET = 0, but
this will now still work if virtual and physical addresses differ.
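
A sketch of the offset-style conversion the helpers perform; the
Z_X86_VIRT_OFFSET name and its zero value are assumptions standing in
for the virtual-minus-physical delta:

    #include <stdint.h>

    /* Assumed delta between where the kernel is linked and where it is
     * loaded; zero while everything is identity mapped.
     */
    #define Z_X86_VIRT_OFFSET 0UL

    static inline uintptr_t phys_addr(void *virt)
    {
        return (uintptr_t)virt - Z_X86_VIRT_OFFSET;
    }

    static inline void *virt_addr(uintptr_t phys)
    {
        return (void *)(phys + Z_X86_VIRT_OFFSET);
    }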

The assembly code was also updated for 32-bit. Comments were left in
the 64-bit code, as long mode semantics don't allow use of the
Z_X86_PHYS_ADDR macro; this can be revisited later.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-15 14:16:51 -05:00
Andrei Emeltchenko
34573803a8 arch: x86_64: Rename _exception_stack to z_x86_exception_stack
Rename the stack according to the MISRA-C standard.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2020-12-10 17:06:17 +02:00
Andrei Emeltchenko
a4dbb51e74 arch: x86_64: Rename _nmi_stack to z_x86_nmi_stack
Rename the stack according to the MISRA-C standard.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2020-12-10 17:06:17 +02:00
Andrei Emeltchenko
ddc5139a6e arch: x86_64: Trivial correction
Correct register name in comment.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2020-12-10 17:06:17 +02:00
Andrei Emeltchenko
e7d3dd1362 arch: x86_64: Using right exception stack with KPTI
With kernel page table isolation (KPTI) we cannot use the right
exception stack, since after using the trampoline stack there was
always a switch to the 7th IST stack (__x86_tss64_t_ist7_OFFSET).
Make this configurable as a parameter in EXCEPT(nr, ist) and
EXCEPT_CODE(nr, ist). For the NMI we would use ist6 (_nmi_stack).

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2020-12-10 17:06:17 +02:00
Andrei Emeltchenko
85db42883f arch/x86: Use NMI stack for NMIs
NMI can be triggered at any time, even when in the process of
switching stacks. Use a dedicated stack for it.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2020-12-10 17:06:17 +02:00
Andrei Emeltchenko
8db06aee69 arch/x86: Add NMI registration API
Add simple NMI registration API.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2020-12-10 17:06:17 +02:00
Tomasz Bursztyka
abf65b5e65 arch/x86: Generalize dynamic irq connection on given vector
This will be used by MSI multi-vector implementation to connect the irq
and the vector prior to allocation.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-12-08 09:29:20 -05:00
Tomasz Bursztyka
bd57e4cf12 arch/x86: Generalize the vector allocator
This will be used by MSI/MSI-X when multi-vector is requested.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-12-08 09:29:20 -05:00
Krzysztof Chruscinski
3ed8083dc1 kernel: Cleanup logger setup in kernel files
Most kernel files were declaring the os log module without providing
a log level. Because of that, the default log level was used instead
of CONFIG_KERNEL_LOG_LEVEL.
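
For example, declaring the module with an explicit level (standard
Zephyr logging API of this era) instead of relying on the default:

    #include <logging/log.h>

    /* Tie this file to the shared "os" module at the kernel's configured
     * level, rather than silently falling back to the default log level.
     */
    LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);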

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2020-11-27 09:56:34 -05:00
Daniel Leung
11e6b43090 tracing: roll thread switch in/out into thread stats functions
Since tracing of threads being switched in/out uses the same
instrumentation points, we can roll the tracing function calls
into the thread stats gathering functions.
This avoids duplicating code just to call another function.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-11-11 23:55:49 -05:00
Andrew Boie
b8242bff64 x86: move page tables to the memory domain level
- z_x86_userspace_enter() for both 32-bit and 64-bit now
  call into C code to clear the stack buffer and set the
  US bits in the page tables for the memory range.

- Page tables are now associated with memory domains,
  instead of having separate page tables per thread.
  A spinlock protects write access to these page tables,
  and read/write access to the list of active page
  tables.

- arch_mem_domain_init() implemented, allocating and
  copying page tables from the boot page tables.

- struct arch_mem_domain defined for x86 (sketched below).
  It has a page table link and also a list node for
  iterating over them.
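
A minimal sketch of that per-domain structure; pentry_t and the member
names are assumptions, not the tree's actual definitions:

    #include <stdint.h>
    #include <sys/slist.h>

    typedef uint64_t pentry_t; /* assumption: a page table entry */

    /* Sketch of the x86 per-memory-domain data described above. */
    struct arch_mem_domain {
        pentry_t *ptables;  /* top-level page tables owned by this domain */
        sys_snode_t node;   /* links the domain into the active-ptables list */
    };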

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-05 09:33:40 -05:00
Andrew Boie
d31f62a955 x86: smp: add TLB shootdown logic
This will be needed when we support memory unmapping, or
the same user mode page tables on multiple CPUs. Neither
is implemented yet.
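
Conceptually, a shootdown broadcasts an IPI and each remote CPU
invalidates its own TLB; a sketch of the per-CPU side (the handler name
is hypothetical):

    /* Hypothetical handler run on each remote CPU when it receives the
     * shootdown IPI.  Reloading CR3 flushes that CPU's non-global TLB
     * entries.
     */
    static void tlb_shootdown_ipi_handler(void)
    {
        __asm__ volatile("movq %%cr3, %%rax\n\t"
                         "movq %%rax, %%cr3"
                         ::: "rax", "memory");
    }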

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-05 09:33:40 -05:00
Andrew Boie
edc5e31d6b x86_64: fix RBX clobber in nested IRQ case
In the code path for nested interrupts, we are not saving
RBX, yet the assembly code is using it as a storage location
for the ISR.

Use RAX. It is backed up in both the nested and non-nested
cases, and the ASM code is not currently using it at that
point.

Fixes: #29594

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-10-28 10:29:32 -07:00
Daniel Leung
53ac1ee6fa x86_64: add support for thread local storage
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-10-24 10:52:00 -07:00
Tomasz Bursztyka
2b6c574499 arch/x86: If ACPI is enabled, get the CPU APIC ID from it
The hardcoded APIC ID will be kept as the default if the CPU is not
found in the ACPI MADT.

Note that ACPI may expose more "CPUs" than there actually are
physically. Thus, make the logic aware of this possibility by checking
the enabled flag (non-enabled CPUs are ignored).

This fixes the up_squared board, which uses a Celeron CPU.
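
The check amounts to skipping MADT Local APIC entries whose "enabled"
flag (bit 0 of the flags field) is clear; the structure and helper
below are illustrative, not the actual driver code:

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative layout of an ACPI MADT Local APIC entry. */
    struct madt_local_apic {
        uint8_t type;
        uint8_t length;
        uint8_t acpi_cpu_id;
        uint8_t apic_id;
        uint32_t flags;      /* bit 0: processor enabled */
    } __attribute__((packed));

    static bool cpu_usable(const struct madt_local_apic *lapic)
    {
        return (lapic->flags & 1U) != 0U; /* ignore non-enabled "CPUs" */
    }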

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-10-01 11:16:40 -07:00
Andrew Boie
a696f8ba27 x86: rename CONFIG_EXCEPTION_STACKS_SIZE
This is an x86-specific definition.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-28 13:38:41 -07:00
Andrew Boie
61d42cf42b x86-64: fix thread tracing
The current instrumentation point for CONFIG_TRACING added in
PR #28512 had two problems:

- If userspace and KPTI are enabled, the tracing point is simply
  never run if we are resuming a user thread as the
  z_x86_trampoline_to_user function is jumped to and calls
  'iret' from there

- Only %rdi is being saved. However, at that location, *all*
  caller-saved registers are in use as they contain the
  resumed thread's context

The simplest solution is to move this up near where we update page
tables. The #ifdefs are used to make sure we don't push/pop
%rdi more than once. At that point in the code only %rdi
is in use among the volatile registers.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-22 20:47:48 -04:00
Anas Nashif
75d159bf0d tracing: x86_64: move switched_in to switch function
Tracing switched-in threads in C code does not work; it needs to happen
in the arch_switch code. See also Xtensa and ARC.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-09-20 21:27:55 -04:00
Kumar Gala
283a057279 x86_64: Fix memory access size for locore EOI
Newer QEMU (5.1) hangs / times out on a number of tests on x86_64.
Debugging the issue showed this is related to a fix in QEMU 5.1 that
validates memory region access sizes. QEMU allows only 1 to 4 byte
accesses to the APIC region; a 64-bit access is treated as an error.

Change the APIC EOI access in locore.S back to just doing a 32-bit
access.
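
The working form is a plain 32-bit store to the EOI register, offset
0xB0 from the xAPIC MMIO base; the base address symbol below is an
assumption for illustration:

    #include <stdint.h>

    #define LAPIC_MMIO_BASE 0xFEE00000UL /* typical default; may differ */
    #define LAPIC_EOI       0xB0         /* xAPIC End Of Interrupt register */

    static inline void lapic_eoi(void)
    {
        /* Must be a 32-bit access: QEMU 5.1 rejects 64-bit accesses here. */
        *(volatile uint32_t *)(LAPIC_MMIO_BASE + LAPIC_EOI) = 0U;
    }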

Fixes #28453

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-09-18 13:29:00 -05:00
Daniel Leung
43b2e291db x86_64: fix size to init stack at boot
The boot code of x86_64 initializes the stack (if enabled)
with a hard-coded size for the ISR stack. However,
the stack being used does not have to be the ISR stack,
and can be any defined stack. So pass in the actual size
of the stack so it can be initialized properly.

Fixes #21843

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-09-17 21:05:45 -04:00
Anas Nashif
6e27478c3d benchmarking: remove execution benchmarking code
This code had one purpose only: feeding timing information into a test,
and was not used by anything else. The custom trace points were
unfortunately not accurate, and this test was delivering information
that conflicted with other tests we have, due to the placement of such
trace points in the architecture and kernel code.

For such measurements we are planning to use the tracing functionality
in a special mode that would be used for metrics without polluting the
architecture and kernel code with additional tracing and timing code.

Furthermore, much of the assembly code used had issues.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-09-05 13:28:38 -05:00
Andrew Boie
7d32e9f9a5 mmu: support only identity RAM mapping
We no longer plan to support a split address space with
the kernel in high memory and per-process address spaces.
Because of this, we can simplify some things. System RAM
is now always identity mapped at boot.

We no longer require any virtual-to-physical translation
for page tables, and can remove the dual-mapping logic
from the page table generation script since we won't need
to transition the instruction pointer off of physical
addresses.

CONFIG_KERNEL_VM_BASE and CONFIG_KERNEL_VM_LIMIT
have been removed. The kernel's address space always
starts at CONFIG_SRAM_BASE_ADDRESS, with a fixed size
specified by CONFIG_KERNEL_VM_SIZE.

Driver MMIOs and other uses of k_mem_map() are still
virtually mapped, and the later introduction of demand
paging will result in only a subset of system RAM being
a fixed identity mapping instead of all of it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-03 14:24:38 -04:00
Tomasz Bursztyka
93cd336204 arch: Apply dynamic IRQ API change
Switch to a constant parameter.

Fixes #27399

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-02 13:48:13 +02:00
Tomasz Bursztyka
7def6eeaee arch: Apply IRQ offload API change
Switch to a constant parameter.

Fixes #27399

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-02 13:48:13 +02:00
Andrew Boie
38e17b68e3 x86: paging code rewrite
The x86 paging code has been rewritten to support another paging mode
and non-identity virtual mappings.

 - Paging code now uses an array of paging level characteristics and
   walks tables using for loops (see the sketch after this list), as
   opposed to having different functions for every paging level and
   lots of #ifdefs. The code is now more concise and adding new paging
   modes should be trivial.

 - We now support 32-bit, PAE, and IA-32e page tables.

 - The page tables created by gen_mmu.py are now installed at early
   boot. There are no longer separate "flat" page tables. These tables
   are mutable at any time.

 - The x86_mmu code now has a private header. Many definitions that did
   not need to be in public scope have been moved out of mmustructs.h
   and either placed in the C file or in the private header.

 - Improvements to dumping page table information, with the physical
   mapping and flags all shown

 - arch_mem_map() implemented

 - x86 userspace/memory domain code ported to use the new
   infrastructure.

 - add logic for physical -> virtual instruction pointer transition,
   including cleaning up identity mappings after this takes place.
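
A sketch of the table-driven walk described in the first bullet; the
structure members, constants, and the identity-mapping shortcut are
illustrative, not the tree's actual definitions:

    #include <stdint.h>
    #include <stddef.h>

    typedef uint64_t pentry_t; /* assumption: page table entry type */

    /* Characteristics of each level, e.g. PML4/PDPT/PD/PT for IA-32e. */
    struct paging_level {
        unsigned int shift;   /* low address bits consumed below this level */
        unsigned int entries; /* number of entries in a table at this level */
    };

    static const struct paging_level levels[] = {
        { 39, 512 }, { 30, 512 }, { 21, 512 }, { 12, 512 },
    };

    /* Walk to the leaf entry for 'virt'; assumes identity-mapped tables
     * so the physical pointer in an entry can be dereferenced directly.
     */
    static pentry_t *walk(pentry_t *top, uintptr_t virt)
    {
        pentry_t *table = top;

        for (size_t lvl = 0; lvl + 1 < sizeof(levels) / sizeof(levels[0]); lvl++) {
            size_t idx = (virt >> levels[lvl].shift) & (levels[lvl].entries - 1);

            table = (pentry_t *)(uintptr_t)(table[idx] & 0x000FFFFFFFFFF000ULL);
        }

        return &table[(virt >> levels[3].shift) & (levels[3].entries - 1)];
    }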

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-25 15:49:59 -04:00
Andrew Boie
ddb63c404f x86_64: fix sending locore EOI
The address was being truncated because we were using
32-bit registers. CONFIG_MMU is always enabled on 64-bit,
so remove the #ifdef.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-25 15:49:59 -04:00
Daniel Leung
8fbb14ef50 coredump: add support for x86 and x86_64
This adds the necessary bits to enable coredump for x86
and x86_64.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-08-24 20:28:24 -04:00
Andrew Boie
8b4b0d6264 kernel: z_interrupt_stacks are now kernel stacks
This will save memory on many platforms that enable
user mode.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
8ce260d8df kernel: introduce supervisor-only stacks
These stacks are appropriate for threads that run purely in
supervisor mode, and also as stacks for interrupt and exception
handling.

Two new arch defines are introduced:

- ARCH_KERNEL_STACK_GUARD_SIZE
- ARCH_KERNEL_STACK_OBJ_ALIGN

New public declaration macros:

- K_KERNEL_STACK_RESERVED
- K_KERNEL_STACK_EXTERN
- K_KERNEL_STACK_DEFINE
- K_KERNEL_STACK_ARRAY_DEFINE
- K_KERNEL_STACK_MEMBER
- K_KERNEL_STACK_SIZEOF

If user mode is not enabled, K_KERNEL_STACK_* and K_THREAD_STACK_*
are equivalent.

Separately generated privilege elevation stacks are now declared
like kernel stacks, removing the need for K_PRIVILEGE_STACK_ALIGN.
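
Typical usage of the new declarations, as a sketch (the stack size and
priority are arbitrary values for illustration):

    #include <kernel.h>

    K_KERNEL_STACK_DEFINE(worker_stack, 1024);
    static struct k_thread worker_thread;

    static void worker(void *a, void *b, void *c)
    {
        /* runs purely in supervisor mode */
    }

    void start_worker(void)
    {
        k_thread_create(&worker_thread, worker_stack,
                        K_KERNEL_STACK_SIZEOF(worker_stack),
                        worker, NULL, NULL, NULL,
                        K_PRIO_PREEMPT(5), 0, K_NO_WAIT);
    }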

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
b0c155f3ca kernel: overhaul stack specification
The core kernel computes the initial stack pointer
for a thread, properly aligning it and subtracting out
any random offsets or thread-local storage areas.
arch_new_thread() no longer needs to make any calculations;
an initial stack frame may be placed at the bounds of
the new 'stack_ptr' parameter passed in. This parameter
replaces 'stack_size'.

thread->stack_info is now set before arch_new_thread()
is invoked; z_new_thread_init() has been removed.
The values populated may need to be adjusted on arches
which carve out MPU guard space from the actual stack
buffer.

thread->stack_info now has a new member 'delta' which
indicates any offset applied for TLS or random offset.
It's used so the calculations don't need to be repeated
if the thread later drops to user mode.

CONFIG_INIT_STACKS logic is now performed inside
z_setup_new_thread(), before arch_new_thread() is called.

thread->stack_info is now defined as the canonical
user-accessible area within the stack object, including
random offsets and TLS. It will never include any
carved-out memory for MPU guards and must be updated at
runtime if guards are removed.

Available stack space is now optimized. Some arches may
need to significantly round up the buffer size to account
for page-level granularity or MPU power-of-two requirements.
This space is now accounted for and used by virtue of
the Z_THREAD_STACK_SIZE_ADJUST() call in z_setup_new_thread.
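
A conceptual sketch of the core-kernel side of the new contract; the
helper, parameter names, and the alignment constant are assumptions,
with only the stack_info.delta idea coming from this commit:

    #include <stdint.h>
    #include <stddef.h>

    #define STACK_ALIGN 16UL /* stand-in for the arch stack pointer alignment */

    /* Compute the initial SP handed to arch_new_thread(); the kernel
     * remembers (random_offset + tls_size) in thread->stack_info.delta
     * so the adjustment need not be recomputed when dropping to user
     * mode later.
     */
    static char *initial_stack_ptr(char *stack_buf, size_t stack_size,
                                   size_t random_offset, size_t tls_size)
    {
        uintptr_t top   = (uintptr_t)stack_buf + stack_size;
        size_t    delta = random_offset + tls_size;

        return (char *)((top - delta) & ~(STACK_ALIGN - 1));
    }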

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
24825c8667 arches: fix arch_new_thread param names
MISRA-C wants the parameter names in a function implementation
to match the names used by the header prototype.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
62eb7d99dc arch_interface: remove unnecessary params
arch_new_thread() passes along the thread priority and option
flags, but these are already initialized in thread->base and
can be accessed there if needed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
ee3c50ba6d x86: apic: use device MMIO APIs
A hack was required for the loapic code due to the address
range not being in DTS. A bug was filed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-17 11:38:18 +02:00
Andy Ross
c7d76cbe58 arch/x86: Add a spurious interrupt handler to x86_64
Right now x86_64 doesn't install handlers for vectors that aren't
populated by Zephyr code.  Add a tiny spurious interrupt handler that
logs the error and triggers a fatal error, like other platforms do.
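
Conceptually the handler does something like the following; the logging
and fatal-error calls are existing Zephyr APIs, while the handler name
and how the vector number reaches it are assumptions:

    #include <logging/log.h>
    #include <fatal.h>

    LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);

    /* Installed on every vector not otherwise populated by Zephyr code. */
    static void spurious_isr(const void *vector)
    {
        LOG_ERR("Spurious interrupt, vector %d", (int)(uintptr_t)vector);
        z_fatal_error(K_ERR_SPURIOUS_IRQ, NULL);
    }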

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-07-08 12:34:09 +02:00
Andy Ross
36b8db0129 arch/x86: Map 512G by default when booting x86_64
The default page table (the architecturally required one used for
entrance to long mode, before the OS page tables get assembled) was
mapping the first 4G of memory.

Extend this to 512G by fully populating the second level page table.
We have devices now (up_squared) which have real RAM mapped above 4G.
There's really no good reason not to do this; the page is always
present anyway.
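
Conceptually, the change fully populates the 512-entry second-level
table so each entry covers 1 GiB; a C rendering of the static assembly
tables, with the names and the 1 GiB-page assumption being illustrative:

    #include <stdint.h>

    #define PTE_P  (1ULL << 0) /* present */
    #define PTE_RW (1ULL << 1) /* writable */
    #define PTE_PS (1ULL << 7) /* 1 GiB page at this level */

    static uint64_t boot_pdpt[512] __attribute__((aligned(4096)));

    static void map_first_512g(void)
    {
        for (uint64_t i = 0; i < 512; i++) {
            boot_pdpt[i] = (i << 30) | PTE_P | PTE_RW | PTE_PS;
        }
    }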

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-07-08 12:34:09 +02:00
Andy Ross
4d5e67ed13 arch/x86: Unbreak SMP startup on x86_64
A last minute "cleanup" to the EFI startup path (on a system where I
had SMP disabled) moved the load of the x86_cpuboot[0] entry into RBP
into the main startup code, which is wrong because on auxiliary CPUs
that's already set up by the 16/32 bit entry code to point to the
OTHER entries.

Put it back where it belongs.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-07-07 12:59:33 -04:00
Andy Ross
928d31125f arch/x86: Add zefi, an EFI stub/packer/wrapper/loader
This is a first cut at a tool that will convert a built Zephyr ELF
file into an EFI application suitable for launching directly from the
firmware of a UEFI-capable device, without the need for an external
bootloader.

It works by including the Zephyr sections into the EFI binary as
blobs, then copying them into place on startup.

Currently, it is not integrated into the build.  Right now you have to
build an image for your target (up_squared has been tested) and then
pass the resulting zephyr.elf file as an argument to the
arch/x86/zefi/zefi.py script.  It will produce a "zephyr.efi" file in
the current directory.

This involved a little surgery in x86_64 to copy over some setup that
was previously being done in 32 bit mode to a new EFI entry point.
There is no support for 32 bit UEFI targets for toolchain reasons.

See the README for more details.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-07-02 09:10:01 -04:00
Kumar Gala
a1b77fd589 zephyr: replace zephyr integer types with C99 types
git grep -l 'u\(8\|16\|32\|64\)_t' | \
    xargs sed -i "s/u\(8\|16\|32\|64\)_t/uint\1_t/g"
git grep -l 's\(8\|16\|32\|64\)_t' | \
    xargs sed -i "s/s\(8\|16\|32\|64\)_t/int\1_t/g"

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-06-08 08:23:57 -05:00
Daniel Leung
251cb61e20 x86_64: instrument code for timing information
On x86_64, the arch_timing_* variables are not set, which
results in incorrect values being used in the timing_info
benchmarks. So instrument the code to provide those values.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-05-20 22:36:04 +02:00
Zide Chen
d27f6cb5eb interrupt_controller: program local APIC LDR register for xAPIC
If IO APIC is in logical destination mode, local APICs compare their
logical APIC ID defined in LDR (Logical Destination Register) with
the destination code sent with the interrupt to determine whether or not
to accept the incoming interrupt.

This patch programs LDR in xAPIC mode to support IO APIC logical mode.

The local APIC ID from the local APIC ID register can't be used as the
'logical APIC ID' because LAPIC IDs may not be consecutive numbers,
which makes it impossible for the LDR to encode 8 IDs within 8 bits.

This patch chooses 0 for the BSP and, for APs, cpu_number, which is
the index into x86_cpuboot[], ultimately assigned in z_smp_init().
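
In xAPIC mode the logical ID lives in bits 31:24 of LDR (MMIO offset
0xD0); a sketch of the programming step, with the base address symbol
and the ID source being assumptions:

    #include <stdint.h>

    #define LAPIC_MMIO_BASE 0xFEE00000UL /* typical default; may differ */
    #define LAPIC_LDR       0xD0         /* Logical Destination Register */

    static void program_ldr(uint8_t logical_id)
    {
        /* 0 for the BSP; for APs, the cpu_number index into x86_cpuboot[]. */
        *(volatile uint32_t *)(LAPIC_MMIO_BASE + LAPIC_LDR) =
            (uint32_t)logical_id << 24;
    }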

Signed-off-by: Zide Chen <zide.chen@intel.com>
2020-05-08 22:32:39 -04:00
Andrew Boie
0091a700d3 x86_64: fix crash on nested interrupts
x86_64 supports 4 levels of interrupt nesting, with
the interrupt stack divided up into sub-stacks for
each nesting level.

Unfortunately, the initial interrupt stack pointer
on the first CPU was not taking into account reserved
space for guard areas, causing a stack overflow exception
when attempting to use the last interrupt nesting level,
as that page had been set up as a stack guard.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-01 11:44:05 -07:00
Andrew Boie
1f6f977f05 kernel: centralize new thread priority check
This was being done inconsistently in arch_new_thread(); just
move it to the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-04-21 18:45:45 -04:00
Andrew Boie
c0df99cc77 kernel: reduce scope of z_new_thread_init()
The core kernel z_setup_new_thread() calls into arch_new_thread(),
which calls back into the core kernel via z_new_thread_init().

Move everything that doesn't have to be in z_new_thread_init() to
z_setup_new_thread() and convert it to an inline function.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-04-21 18:45:45 -04:00