When to use CLI and when not.

Started by Dinosaur, July 01, 2005, 12:21:40 AM


Dinosaur

Hi all

I am having a program hang and suspect it is related to clearing and setting the interrupt flag incorrectly.
As an example, the code below shows how I have intercepted the keyboard interrupt to increment a flag in my main code.
Basically it saves me from having to repeatedly check for keyboard input.
I have the same scenario with the timer tick and touch screen.

At the same time I have an ADC that raises IRQ10 2000 times per second.
If an IRQ10 occurs during the handling of the keyboard interrupt, it gets ignored.
But if I miss this IRQ, then I have to read the ADC "manually" to allow it to keep raising IRQ10.

So if I use CLI and STI when it is not required (or wrongly), and I don't know that an IRQ10 has been missed, then
the whole thing grinds to a halt.

Therefore my question to the group is this:
If my keyboard handler clears and sets the interrupt flag, and I then pass control to the old keyboard handler,
what does it do with the interrupt flag? Does it clear it again and, when finished, set it?
Looking at my code, does it even need to be cleared?

   .MODEL  MEDIUM
   .486
   .CODE
   ;-------------------------
   PUBLIC  TSRKbd
   OldHandler2   DWORD   ?
   VarblSeg2     DW      ?
   VarblOfs2     DW      ?
   ;-------------------------

TSRKbd      PROC    FAR
   ;------------
   PUSH    BP
   MOV     BP,SP
   ;------------
   push    ds                            ;for return to basic
   push    cs                            ;
   pop     ds                            ;set DS to current CS
   ;------------
   MOV     BX,[BP+6]                     ;segment of basic variable
   CMP     BX,0                          ;if zero then program is terminating
   JZ      Restore2                      ;so restore old handler
   MOV     VarblSeg2,BX                  ;
   MOV     BX,[BP+10]                    ;offset of basic variable
   MOV     VarblOfs2,BX                  ;
   ;-----------------------------------
   mov     ax, 3509h                     ;function 35h
   int     21h                           ;get vector for keyboard
   mov     WORD PTR OldHandler2[0], bx   ;save addr of original handler
   mov     WORD PTR OldHandler2[2], es
   mov     ax, 2509h                     ;function 25h
   mov     dx, OFFSET TsrKeys            ;new handler addr
   int     21h                           ;set vector with addr of new handler
   ;-----------------------------------
   jmp     Dos_Exit
Restore2:
   lds     dx,OldHandler2                ;restore original keyboard vector
   mov     ax,2509h
   int     21h
   jmp     Dos_Exit


TsrKeys     PROC    FAR
   cli
   ;---------
   PUSH    DS
   PUSH    BX
   ;---------------
   mov     ds,VarblSeg2                  ;set seg to Basic's variable
   mov     bx,VarblOfs2                  ;set ofs  "    "        "
   inc     dword ptr [bx]                ;inc the variable
   ;---------------
   POP     BX
   POP     DS
   ;*********
   sti
   jmp     cs:OldHandler2
TsrKeys     ENDP
;---------------------------------------
Dos_Exit:
   pop     ds                            ;restore basic's DS
   POP     BP                            ;
   ret     8                             ;clean up the two far-pointer arguments
TSRKbd      ENDP
            END


I always spend a lot of time aligning my quotes and comments, but whenever I post them they turn to Sh*t. :'(

Regards
Dinosaur

Bieb

Well, I'm not sure about your problem, but your code boxes are messed up. I think you forgot to put a / in front of the closing Quote. That should make your code display properly.

Phil

I don't think you need to worry about either CLI or STI since you are always calling the original keyboard handler after you pre-process the interrupt.

I think any problems that you may be seeing might be caused because your program is not making a TSR request after it installs your handler. That is very important because your code would be replaced by the next program that gets loaded into memory by DOS unless you make a special request to terminate, but stay resident. I might be mistaken, but I don't think I saw that in your code anywhere. I can see that you have the procedures named to indicate that effect, but without making the special call ... DOS just won't know!
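For reference, this is the sort of call I mean. DOS function 31h is the standard terminate-and-stay-resident request; the ResidentParas value is only a placeholder that would have to be computed from the size of the resident image:

   mov     ax, 3100h            ;AH = 31h (TSR), AL = return code 0
   mov     dx, ResidentParas    ;paragraphs (16-byte units) to keep resident
   int     21h                  ;return to DOS, leaving the handler in memory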

This is going back a few months for me, but I have included two small programs that let you see how many times the keyboard service interrupt is called. The first, TSRCOUNT, either installs or removes the keyboard interrupt service. The second, KEYCOUNT, returns the value of the counter which is incremented each time the interrupt is processed when TSRCOUNT is installed and, otherwise, returns a bogus value for the count if TSRCOUNT is not installed. The source is for the A86 assembler that is freely available on the net, but I have also included the .com and .lst files so you can test it directly if you like. It's *always* a bad idea to run someone else's .com file directly, but you could check them out with DEBUG and compare them with the .lst files to make sure you trust them.

Hope this helps a little bit. They are fairly simple and similar to what you want to do, so I thought I'd pass them along. The only other thing I should mention is that your interrupt service might be called when other tasks are scheduled, so you have to take care to remember the segment where the variable that you are incrementing lives. Otherwise, you could be touching variables in the wrong segment ... or causing GPFs!


[attachment deleted by admin]

Dinosaur

Thanks Phil.

My code is always written with exclusivity in mind, something I didn't mention.
My program is in complete charge of the computer; nothing else will get loaded by DOS
once I take control. The CPU board is actually part of a machine that we build.
Other than Command.com no other drivers are used, with the exception of Unreal, which allows me to address memory linearly (unsegmented).

Also, my one and only call is to replace the original handler address with the new handler address.
After that I don't call that routine again until I quit the program to restore the original address.

This process is repeated for the timer tick and touch screen.
I suspected that someone may pick up on the number of interrupts per second,
i.e. 18 for the timer tick, 2000 for my ADC, etc.
Amazingly, this 15-module program that is performing a huge number of tasks is only taking 47 uSec per loop,
on a 133 MHz 486.

I have actually solved the hang problem by inserting CLI and STI in a memory-clearing routine.
But I still don't understand the rules about using them.


Robert
Quote
I think you forgot to put a / in front of the closing Quote

On the old site we had a close-quote button, so I guess I am a slow learner.

Regards
Dinosaur


Gustav


The CLI in the TsrKeys routine is redundant, because when an interrupt handler is called in real mode the interrupt flag has already been cleared (interrupts disabled).

The STI in TsrKeys is arguably an error, because the code in OldHandler2 may expect IF to be cleared, and in fact it is not.
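For what it's worth, a stripped-down version of the handler along those lines (same labels as above, with the CLI/STI simply omitted and nothing else changed) could look like this:

TsrKeys  PROC   FAR
   PUSH    DS
   PUSH    BX
   mov     ds,VarblSeg2          ;IF is already clear on entry, so no CLI is needed
   mov     bx,VarblOfs2
   inc     dword ptr [bx]        ;bump the flag in Basic's segment
   POP     BX
   POP     DS
   jmp     cs:OldHandler2        ;chain with IF still clear; the old handler's IRET
                                 ;restores the caller's original flags
TsrKeys  ENDP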


Dinosaur

So, are you saying that when an interrupt is detected by DOS, it already clears the Interrupt Flag
before I even get to the handler?
Therefore if I have a lot of work to do to handle the interrupt, but want to finish my current task first,
the Interrupt Flag stays cleared all that time. (That's not what I do, just a theoretical question.)


That still leaves the question:
In real mode under DOS, when am I allowed/supposed to use CLI/STI?

In the past I have used them only when bit-bashing an ADC read, where an interrupt would stuff
up my bit timing.

Regards

MichaelW

#6
AFAIK the BIOS IRQ1 handler will STI as soon as possible to avoid blocking IRQ0. This is normal practice for interrupt handlers. The handler must read the keystroke data from the keyboard controller and then essentially reprogram it in preparation for the next key, and the controller is relatively slow, so I would expect the service time to be relatively long. The attachment contains a program that is supposed to measure the time required to service IRQ1. The program uses system timer 2, which (assuming a 1,193,182 Hz input clock) has a resolution of 0.838 microseconds and will overflow after 54,925 microseconds. Under Windows 2000 the timer overflows for all keystrokes other than an Enter immediately following a Pause/Break, for which the time is in the 54-55 ms range. I am measuring this same range for all keystrokes on an old Pentium MMX system running under Windows 98 MS-DOS mode. I'm not certain that these times are correct, but I think it very likely that the service time would be longer than the 0.5 ms period of your adc interrupt.
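The timer 2 setup is roughly the following kind of thing. The port numbers are the standard 8253/8254 ones, but this is just an illustrative sketch rather than the attached program:

   in      al, 61h
   and     al, 0FCh             ;speaker data off, timer 2 gate low
   out     61h, al
   mov     al, 0B0h             ;counter 2, write LSB then MSB, mode 0
   out     43h, al
   mov     al, 0FFh
   out     42h, al              ;initial count LSB = FFh
   out     42h, al              ;initial count MSB = FFh (count = 0FFFFh)
   in      al, 61h
   or      al, 01h              ;raise the gate - counter 2 starts counting down
   out     61h, al
   ;... the interval being measured goes here ...
   mov     al, 80h              ;latch counter 2
   out     43h, al
   in      al, 42h              ;latched LSB
   mov     ah, al
   in      al, 42h              ;latched MSB
   xchg    al, ah               ;AX = current count
   mov     dx, 0FFFFh
   sub     dx, ax               ;DX = elapsed ticks, 0.838 microseconds each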

The only solution I can see would be to use a higher-priority interrupt for your ADC. IRQ0 would be one possibility, if available. I think the handler could probably separate the ADC interrupts from the timer 0 interrupts by checking the status of timer 0. Another possibility, and the only possibility available to an ISA card, would be the NMI. When an ISA card asserts Channel Check (CHCHK#), if NMI interrupts are enabled (bit 7 of I/O port 70h is clear) and recognition of Channel Check is enabled (bit 3 of I/O port 61h is clear), an NMI is issued to the processor. The processor associates entry 2 in the interrupt vector table with the NMI. To verify that the NMI was triggered by Channel Check, the handler can check that bit 6 of I/O port 61h is set.
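In outline, the check would be something like this; the port and bit numbers are the ones just given, while the handler itself is only an illustrative sketch:

NmiHandler  PROC  FAR
   push    ax
   in      al, 61h               ;system control port B
   test    al, 40h               ;bit 6 set = Channel Check (IOCHK#) caused the NMI
   jz      NotChannelCheck
   ;... read the adc here, then usually toggle bit 3 of port 61h high and
   ;    low again to clear and re-arm the Channel Check latch ...
NotChannelCheck:
   pop     ax
   iret                          ;no EOI needed - the NMI bypasses the 8259A
NmiHandler  ENDP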

In real address mode the processor responds to an interrupt by pushing the flags, clearing the interrupt, trap, and AC flags, pushing CS, pushing IP, and then loading CS and IP from the interrupt vector table. The IRET instruction pops the stack, leaving the flags in the state they were in before the interrupt occurred.


[attachment deleted by admin]
eschew obfuscation

Dinosaur

Sounds like the conclusion is that I don't need to CLI if I am chaining to the existing interrupt handler.

Michael, your timing statements have me confused and now doubting my speed calculations.

The keyboard handler usually does not get used more than 2 or 3 times a day,
the timer tick every 55 msec, and
the ADC every 0.5 msec.

I have chained to the timer tick to increment a variable.
Then I count the number of program loops that elapse between timer ticks.
Divide 54945.0549 / Loops = uSec for each program loop (including printing the result to the screen).
Typically this is 47 or 48 uSec per loop?

My ADC is actually connected to a Xilinx CPLD, so for me to get the results
takes two ISA bus accesses at 14 clocks each; for say an 8 MHz bus that is 0.0000035 sec = 3.5 uSec.
So 2000 of these should take at least 7 msec??
Is that 14 clocks of the CPU at 133 MHz, or 14 clocks at ISA bus speed?

Regards

MichaelW

#8
I see nothing wrong with your timing method, but your numbers seem to me to be a little off. In my references the clock frequency for the system timer is given as 1,193,182 or 1,193,180 or 1,193,200. Assuming an initial count of 0 for system timer 0, the timer would count 65,536 cycles per tick, and 65536 / 1,193,182  = 0.054925401 seconds = 54925.40 microseconds per tick.

I made a stupid error in my timing code that caused it to ignore the MSB of the count, so the results were meaningless. Using the corrected code, running on a 500 MHz P3 under Windows 2000, the timer overflows for all keystrokes other than an Enter immediately following a Pause/Break, for which the time is ~100 microseconds. Running on a 166 MHz PMMX under Windows 98 MS-DOS mode I get 110-125 microseconds for most of the keys, and ~8000 (!?) for the toggle keys. So if these numbers are correct, I would guess that a 133MHz 486 would be able to handle most keystrokes in something like 200 microseconds. I have corrected the original attachment.

For the normal 8 MHz ISA bus each clock cycle is 125 ns in duration. A 0-wait-state bus cycle spans two clock cycles, so a single 0-wait-state transfer would require 250 ns. A standard 8-bit bus cycle includes 4 wait states, so each transfer would require 750 ns, but this can be shortened to a 1-wait-state cycle where each transfer would require 375 ns. A standard 16-bit bus cycle includes 1 wait state, so each transfer would require 375 ns. According to one of my references, a 0-wait-state 16-bit bus cycle is possible for memory devices only.

You refer to Basic in your code; what Basic are you using?

eschew obfuscation

Dinosaur

#9
Michael

I use PDS 7.1.

I am glad you have corrected me on the value of each tick.
Stupidly, I simply divided 1000 msec / 18.2.
Below is the (corrected) snippet of code. Times.NowTime is updated by the tick.

      Times.PassCount = Times.PassCount + 1                             'increment the program loops
      Times.Msec = CLNG((Times.NowTime * 54.9254) + (Times.Increment \ 1000))
      '-----------------------------------------------
      IF Times.NowTime > Times.OldTime THEN                             'if a tick has happened, recalculate
         Times.Increment = (54.9254 / Times.PassCount) * 1000           '55 msec / nr of passes = msec per pass (make it uSec)
         Times.OldTime   = Times.NowTime
         Times.PassCount = 0
      END IF


Looking at your ISA I/O timing leads me to conclude:
Using 2 IN AL,DX statements and your time value (I must have a look at the BIOS and see the wait states):
375 nsec x 2 = 750 nsec, x 2000 = 1500 uSec just for reading 16 bits on the bus in 1 sec.

Looking at it another way:
The IRQ occurs precisely every 500 uSec.
If 1 loop takes 47 to 48 uSec then there is one 750 nsec ISA access every 10 loops.
So every 10th loop takes 750 nsec longer. I am never going to see that.

PS:
My I/O boards have an 8254 timer on board, and I have a lot of experience driving them for liquid pulse counting,
clock dividers, interrupt on terminal count, etc.

I will try your timing routine again.
Edit: Tried it, and had to press Ctrl/Break twice to get it to respond - pause?
Anyhow, 34 uSec on a 133 MHz industrial CPU.

However, nobody is replying about the rules for when and when not to use CLI & STI.
Although I have solved my problem, I solved it by trial and error, not by knowing the rules.

Regards
Dinosaur

MichaelW

I can't recall ever seeing any specific rules regarding when and when not to use CLI and STI in an interrupt handler. AFAIK the rules are basically the same as for any real-mode program -- hardware interrupts must be disabled while performing tasks that cannot be interrupted (switching stacks, for example), but they should otherwise be enabled. The workable time limit for keeping interrupts disabled would depend on what else was running on the system and the maximum interrupt rate for any hardware interrupt in use.

One thing that is unique to a hardware interrupt handler is the need to issue an EOI to the interrupt controller. Until this is done the controller will inhibit requests from all but higher-priority devices.
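For the record, the usual non-specific EOI looks like the following; 20h is the master 8259A's command port and 0A0h is the slave's, so a handler for IRQ 8-15 (IRQ10, for example) sends it to both:

   mov     al, 20h              ;non-specific end-of-interrupt command
   out     0A0h, al             ;to the slave PIC (only needed for IRQ 8-15)
   out     20h, al              ;to the master PIC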


eschew obfuscation

dioxin

Dinosaur,
   the rule I use for setting/clearing the interrupt flag is:
   Only ever use CLI/STI outside of an interrupt routine, and then only at the critical times when you really must not be interrupted, such as when updating an interrupt vector (see the sketch below).

   An interrupt vector takes two writes to update, so you must disable interrupts while doing it, in case an interrupt occurs while the new vector is only half updated.

   Setting and clearing of the interrupt flag takes care of itself during the interrupt routine, so you shouldn't need to mess with it then.
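   A minimal sketch of what those two writes look like when poking the vector table directly (INT 09h's vector lives at 0000h:0024h; the NewHandler label is just a placeholder):

   xor     ax, ax
   mov     es, ax                                 ;ES = 0000h, the interrupt vector table
   cli                                            ;no interrupts while the vector is half-written
   mov     word ptr es:[9*4], OFFSET NewHandler   ;offset word
   mov     word ptr es:[9*4+2], SEG NewHandler    ;segment word
   sti                                            ;vector is consistent again

   (Using DOS functions 25h/35h, as in the code earlier in the thread, has DOS do these two writes for you, which is why no CLI/STI is needed there.)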


<<I suspected that someone may pick up on the number of interrupts per second>>

   2000 interrupts per second is no big deal. From memory, the interrupt response time for that type of CPU is around 5 us so, if the interrupt handler is short, you could cope with 10,000+ interrupts per second without difficulty.


<<So, are you saying that when an interrupt is detected by DOS>>

   Interrupts aren't detected by DOS. They're a hardware thing.
   When the CPU detects an interrupt it pushes the flags and instruction pointer, clears the interrupt enable flag, and calls the interrupt routine through the required vector.
   So, when entering the interrupt service routine the interrupt enable flag is always zero, preventing further interrupts until the IRET instruction, which restores the original flags, including the set IF, which re-enables interrupts.

   It has to be done this way because the interrupt is a level-sensitive input. If interrupts were not disabled immediately by the CPU then the interrupt would immediately occur again and the CPU would stall.

   The flag can be set by the programmer early (before IRET) but this isn't usually needed, as the interrupt routine should not take long to execute.
   If you do have a very involved interrupt service routine then you can re-enable interrupts, but you must first clear the cause of the interrupt, otherwise the same interrupt will immediately re-occur and you'll crash the CPU.
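   In outline, that ordering looks like this; the port name and the LongProcessing label are only placeholders, the ordering is the point:

MyAdcIsr  PROC  FAR
   push    ax
   push    dx
   mov     dx, ADC_STATUS_PORT   ;placeholder: whatever access acknowledges the device
   in      al, dx                ;the cause of the interrupt is now cleared...
   sti                           ;...so it is safe to re-enable interrupts for the long part
   call    LongProcessing        ;placeholder for the time-consuming work
   mov     al, 20h               ;non-specific EOI to the 8259A when done
   out     20h, al               ;(an IRQ 8-15 handler also sends one to the slave at 0A0h)
   pop     dx
   pop     ax
   iret
MyAdcIsr  ENDP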

   <<Sounds like the conclusion is that I don't need to CLI if I am chaining to the existing interrupt handler>>

   That's right.



Michael,
   <<In my references the clock frequency for the system timer is >>

   The timer runs at 1,193,181.666 Hz.
   This comes from the color subcarrier frequency of NTSC TV, which was 3.579545 MHz. The crystal used to produce this ran at 4x that frequency = 14.318180 MHz.
   When the PC was designed, that crystal was far cheaper than any other because it was made by the million for TVs, so it was chosen as the reference frequency for all PCs.

   The PC divides it by 12 instead of 4 to give 14,318,180 / 12 = 1.19318166 MHz.


Dinosaur

Dioxin, many thanks for your reply.

Quote
It has to be done this way because the interrupt is a level-sensitive input. If interrupts were not disabled immediately by the CPU then the interrupt would immediately occur again and the CPU would stall.

My experience does not support this. I was under the impression that the IRQ line is triggered on a rising edge.
The signal I use to trigger the IRQ is low for about 140 uSec and, when it goes high, stays high for 360 uSec.
The handler simply reads two 8-bit words from an address and then chains to the old handler.
Running it in CodeView and watching the IRQ on a CRO confirms that my handler only gets attention after each low-to-high transition?

With reference to the timer clock, I will have to test to see what the increment is.

Regards
Dinosaur

dioxin

Dinosaur,
   the source of confusion is that the INTR line on the CPU is not the line you are driving. You probably have a PIC (Programmable Interrupt Controller, usually an 8259A) which drives the CPU's INTR line, and you'll be driving one of the IRx lines of the PIC.
   The PIC can be programmed to be either level or edge triggered.
   The CPU's INTR line is always level triggered.

   The PC designers made an error when they first built the PC by programming the PIC to be edge triggered. This made it difficult to share interrupt lines between devices.
   
Paul.

MichaelW

Hi Paul,

Now that you mention it, I recall IBM's choice of a common TV crystal, and that the frequency was divided by 3 to derive the 4.77 MHz processor and bus clocks (for the original PC), and divided again by 4 to derive the timer input frequency. But it seems to me that the frequency tolerance, in combination with temperature effects, would limit the significant digits to around 5, so even the 1,193,180 value was probably over-specified.

Quote
If you do have a very involved interrupt service routine then you can re-enable interrupts, but you must first clear the cause of the interrupt, otherwise the same interrupt will immediately re-occur and you'll crash the CPU.
This would be so for an MCA system, where the IRQ lines are level triggered, or for an EISA system if the particular IRQ were operating in level triggered mode, but not for a "standard" PC (or at least not without reprogramming the interrupt controller). Per the Intel documentation for the 8259A: "The IR input can remain high without generating another interrupt." The attachment contains a small program that demonstrates this.


[attachment deleted by admin]
eschew obfuscation