NUMTOKEN()

NUMTOKEN()

Retrieves the number of tokens in a string

Syntax

       NUMTOKEN( <cString>, [<cTokenizer>], [<nSkipWidth>] ) -> nTokenCount

Arguments

<cString> Designates the string that is passed.

<cTokenizer> Designates the delimiter list used to separate the tokens.

<nSkipWidth> Designates the number of delimiter characters or sequences after which the next token is counted. This makes it possible to count empty tokens. With the default value, empty tokens are not taken into account.

Returns

The number of tokens contained in <cString> is returned.

Description

Use NUMTOKEN() to determine how many words (or tokens) are contained in a character string. The function uses the following delimiters by default: CHR(0), CHR(9), CHR(10), CHR(13), CHR(26), CHR(32), CHR(138), CHR(141) and the characters ,.;:!?/\<>()^#&%+-* The list can be replaced by your own list of delimiters, <cTokenizer>. Here are some examples of useful delimiters:

      ---------------------------------------------------------------------
      Description         <cTokenizer>
      ---------------------------------------------------------------------
      Pages               CHR(12) (Form Feed)
      Sentences           ".!?"
      File Names          ":\."
      Numerical strings   ",."
      Date strings        "/."
      Time strings        ":."
      ---------------------------------------------------------------------

The skip width designates the maximum number of successive delimiter characters that are combined into one token stop before the next token is counted. This also allows empty tokens, such as those between consecutive delimiters within a string, to be counted.

Examples

       .  A character string is searched using the standard delimiter
          list:
          ? NUMTOKEN("Good Morning!")      // Result: 2
       .  Your own list of delimiters can be specified for particular
          reasons.  Since the delimiter list for the following example only
          contains the characters ".!?", the result is 3.
          ? NUMTOKEN("Yes!  That's it. Maybe not?", ".!?")
       .  This example shows how to count empty tokens.  Parameters
          separated by commas are counted, but some of the parameters are
          skipped.  A token is counted after at least one delimiter (comma):
          String  :=  "one,two,,four"
          ? NUMTOKEN(String, ", ", 1)      // Result: 4
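       .  The delimiter sets from the table above can be used in the same
          way.  As a minimal sketch, the components of a file name are
          counted with the ":\." delimiter set (the result assumes the
          default behaviour described above):
          ? NUMTOKEN("C:\data\test.dbf", ":\.")     // Result: 4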

Tests

       numtoken( "Hello, World!" ) ==  2
       numtoken( "This is good. See you! How do you do?", ".!?" ) == 3
       numtoken( "one,,three,four,,six", ",", 1 ) ==  6

Compliance

NUMTOKEN() is compatible with CT3’s NUMTOKEN().

Platforms

All

Files

Source is token1.c, library is libct.

Seealso

TOKEN(), ATTOKEN(), TOKENLOWER(), TOKENUPPER(), TOKENSEP()

TokenSep()

TokenSep()

Retrieves the token separators of the last token() call

Syntax

      TokenSep( [<lMode>] ) -> cSeparator

Arguments

[<lMode>] If set to .T., the token separator BEHIND the token retrieved by the last token() call is returned. Default: .F., i.e. the separator BEFORE the token is returned.

Returns

Depending on the setting of <lMode>, the separating character before or behind the token retrieved by the last token() call is returned. These separating characters can now also be retrieved with the token() function itself.

Description

When extracting tokens from a string with the token() function, one might be interested in the separator characters that were used to isolate a specific token. To get this information, you can either call the TokenSep() function after each token() call, or use the new 5th and 6th parameters of the token() function.

Examples

      see TOKEN() function
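
      A minimal sketch of the pattern described above; the results shown
      assume the default CT behaviour:

      cToken := token( "one,two,three", ",", 2 )   // "two"
      ? TokenSep()          // "," -- the separator before "two"
      ? TokenSep( .T. )     // "," -- the separator behind "two"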

Compliance

TokenSep() is compatible with CT3’s TokenSep().

Platforms

All

Files

Source is token1.c, library is libct.

Seealso

TOKEN(), NUMTOKEN(), ATTOKEN(), TOKENLOWER(), TOKENUPPER()

TokenLower()

TokenLower()

Change the first letter of tokens to lower case

Syntax

      TokenLower( <[@]cString>, [<cTokenizer>], [<nTokenCount>],
                  [<nSkipWidth>] ) -> cString

Arguments

<[@]cString> is the processed string

[<cTokenizer>] is a list of characters separating the tokens in <cString>. Default: chr(0) + chr(9) + chr(10) + chr(13) + chr(26) + chr(32) + chr(138) + chr(141) + ", .;:!?/\<>()#&%+-*"

[<nTokenCount>] specifies the number of tokens that should be processed. Default: all tokens

[<nSkipWidth>] specifies the maximum number of successive tokenizing characters that are combined as ONE token stop; e.g. specifying 1 can yield empty tokens. Default: 0, i.e. any number of successive tokenizing characters is combined as ONE token stop

Returns

<cString> the string with the first letter of the processed tokens changed to lower case

Description

The TokenLower() function changes the first letter of tokens in <cString> to lower case. To do this, it uses the same tokenizing mechanism as the token() function. If TokenLower() extracts a token that starts with a letter, this letter will be changed to lower case.

The return value of this function can be suppressed by setting the CSETREF() switch to .T.; in that case, you must pass <cString> by reference to get the result.

Examples

      ? TokenLower( "Hello, World, here I am!" )         
                    // "hello, world, here i am!"
      ? TokenLower( "Hello, World, here I am!",, 3 )     
                    // "hello, world, here I am!"
      ? TokenLower( "Hello, World, here I am!", ",", 3 ) 
                    // "hello, World, here I am!"
      ? TokenLower( "Hello, World, here I am!", " W" )   
                    // "hello, World, here i am!"

Tests

      TokenLower( "Hello, World, here I am!" )         
               == "hello, world, here i am!"
      TokenLower( "Hello, World, here I am!",, 3 )     
               == "hello, world, here I am!"
      TokenLower( "Hello, World, here I am!", ",", 3 ) 
               == "hello, World, here I am!"
      TokenLower( "Hello, World, here I am!", " W" )   
               == "hello, World, here i am!"

Compliance

TokenLower() is compatible with CT3’s TokenLower(), but a new 4th parameter, <nSkipWidth>, has been added for synchronization with the other token functions.

Platforms

All

Files

Source is token1.c, library is libct.

Seealso

TOKEN(), NUMTOKEN(), ATTOKEN(), TOKENUPPER(), TOKENSEP(), CSETREF()

Token()

Token()

Tokens of a string

Syntax

      TOKEN( <cString>, [<cTokenizer>],
             [<nTokenCount>], [<nSkipWidth>],
             [<@cPreTokenSep>], [<@cPostTokenSep>] ) -> cToken

Arguments

<cString> is the processed string

[<cTokenizer>] is a list of characters separating the tokens in <cString>. Default: chr(0) + chr(9) + chr(10) + chr(13) + chr(26) + chr(32) + chr(138) + chr(141) + ", .;:!?/\<>()#&%+-*"

[<nTokenCount>] specifies the count of the token that should be extracted. Default: last token

[<nSkipWidth>] specifies the maximum number of successive tokenizing characters that are combined as ONE token stop; e.g. specifying 1 can yield empty tokens. Default: 0, i.e. any number of successive tokenizing characters is combined as ONE token stop

[<@cPreTokenSep>] If given by reference, the tokenizer found before the extracted token will be stored in this parameter

[<@cPostTokenSep>] If given by reference, the tokenizer found after the extracted token will be stored in this parameter

Returns

<cToken> the token specified by the parameters given above

Description

The TOKEN() function extracts the <nTokenCount>th token from the string <cString>. In the course of this, the tokens in the string are separated by the character(s) specified in <cTokenizer>. The function may also extract empty tokens, if you specify a skip width other than zero.

Note the new 5th and 6th parameters, where the TOKEN() function stores the tokenizing characters found before and after the extracted token. Additional calls to the TOKENSEP() function are therefore not necessary.

Examples

      ? token( "Hello, World!" )            -->  "World"
      ? token( "Hello, World!",, 2, 1 )     --> ""
      ? token( "Hello, World!", ",", 2, 1 ) --> " World!"
      ? token( "Hello, World!", " ", 2, 1 ) --> "World!"

Tests

      token( "Hello, World!" )            == "World"
      token( "Hello, World!",, 2, 1 )     == ""
      token( "Hello, World!", ",", 2, 1 ) == " World!"
      token( "Hello, World!", " ", 2, 1 ) == "World!"

Compliance

TOKEN() is compatible with CT3’s TOKEN(), but two additional parameters have been added in which the TOKEN() function can store the tokenizers found before and after the current token.

Platforms

All

Files

Source is token1.c, library is libct.

Seealso

NUMTOKEN(), ATTOKEN(), TOKENLOWER(), TOKENUPPER(), TOKENSEP()

CSetRef()

CSetRef()

Determine return value of reference sensitive CT3 string functions

Syntax

      CSetRef( [<lNewSwitch>] ) -> lOldSwitch

Arguments

[<lNewSwitch>] .T. -> suppress the return value; .F. -> do not suppress the return value

Returns

<lOldSwitch> the old state of the switch (if <lNewSwitch> is a logical value) or its current state

Description

Within the CT3 library, the following functions do not change the length of a string passed as a parameter while transforming it:

ADDASCII() BLANK() CHARADD() CHARAND() CHARMIRR() CHARNOT() CHAROR() CHARRELREP() CHARREPL() CHARSORT() CHARSWAP() CHARXOR() CRYPT() JUSTLEFT() JUSTRIGHT() POSCHAR() POSREPL() RANGEREPL() REPLALL() REPLLEFT() REPLRIGHT() TOKENLOWER() TOKENUPPER() WORDREPL() WORDSWAP()

Thus, these functions allow the string to be passed by reference ([@]), so that it may not be necessary to return the transformed string. After CSetRef( .T. ) has been called, the above mentioned functions return the value .F. instead of the transformed string if the string is passed by reference. The switch is turned off (.F.) by default.
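
Examples

      A minimal sketch; TOKENUPPER() stands in for any of the functions
      listed above, and the results shown assume the default CT behaviour:

      ? CSetRef( .T. )          // .F., the previous state of the switch
      cStr := "good morning"
      ? TOKENUPPER( @cStr )     // .F., the return value is suppressed
      ? cStr                    // "Good Morning"
      CSetRef( .F. )            // restore the default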

Compliance

This function is fully CT3 compatible.

Platforms

All

Files

Source is ctstr.c, library is libct.

Seealso

ADDASCII(), BLANK(), CHARADD(), CHARAND(), CHARMIRR(), CHARNOT(), CHAROR(), CHARRELREP(), CHARREPL(), CHARSORT(), CHARSWAP(), CHARXOR(), CRYPT(), JUSTLEFT(), JUSTRIGHT(), POSCHAR(), POSREPL(), RANGEREPL(), REPLALL(), REPLLEFT(), REPLRIGHT(), TOKENLOWER(), TOKENUPPER(), WORDREPL(), WORDSWAP()

AtToken()

AtToken()

Position of a token in a string

Syntax

      AtToken( <cString>, [<cTokenizer>],
               [<nTokenCount>], [<nSkipWidth>] ) -> nPosition

Arguments

<cString> is the processed string

[<cTokenizer>] is a list of characters separating the tokens in <cString>. Default: chr(0) + chr(9) + chr(10) + chr(13) + chr(26) + chr(32) + chr(138) + chr(141) + ", .;:!?/\<>()#&%+-*"

[<nTokenCount>] specifies the count of the token whose position should be calculated. Default: last token

[<nSkipWidth>] specifies the maximum number of successive tokenizing characters that are combined as ONE token stop; e.g. specifying 1 can yield empty tokens. Default: 0, i.e. any number of successive tokenizing characters is combined as ONE token stop

Returns

<nPosition> The start position of the specified token or 0 if such a token does not exist in <cString>.

Description

The AtToken() function calculates the start position of the <nTokenCount>th token in <cString>. By setting the new <nSkipWidth> parameter to a value other than 0, you can specify how many tokenizing characters are combined at most into one token stop. Be aware that this can result in empty tokens whose start position is not clearly defined. In that case, AtToken() returns the position where the token WOULD start if its length were larger than 0. To check for empty tokens, simply test whether the character at the returned position is contained in the tokenizer list.

Examples

      AtToken( "Hello, World!" ) // --> 8  // empty strings after tokenizer
                                           // are not a token !
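
      A minimal sketch of the empty-token check described above; the
      values shown assume the default CT behaviour:

      cStr := "Hello, World!"
      nPos := AtToken( cStr,, 2, 1 )            // --> 7
      ? SubStr( cStr, nPos, 1 ) == " "          // .T., the character at the
                                                // returned position is a
                                                // delimiter, so token 2 is empty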

Tests

      AtToken( "Hello, World!" ) == 8
      AtToken( "Hello, World!",, 2 ) == 8
      AtToken( "Hello, World!",, 2, 1 ) == 7
      AtToken( "Hello, World!", " ", 2, 1 ) == 8

Compliance

AtToken() is compatible with CT3’s AtToken(), but has an additional 4th parameter that lets you specify a skip width, as in the TOKEN() function.

Platforms

All

Files

Source is token1.c, library is libct.

Seealso

Token(), NumToken(), TokenLower(), TokenUpper(), TokenSep()

Harbour All Functions – T

TabExpand
TabPack

Tan

TanH

TBrowseDB

TBrowseNew

TFileRead

THtml

Time

TimeValid

TNortonGuide 

Token
TokenAt
TokenEnd
TokenExit
TokenInit
TokenLower
TokenNext
TokenNum
TokenSep
TokenUpper

Tone

TOs2

Transform
Trim

TRtf

TTroff

 Type

String Functions

AddASCII

AfterAtNum

AllTrim
Asc

ASCIISum

ASCPos
At

AtAdjust

AtNum
AtRepl
AtToken

BeforAtNum

Chr

CharAdd
CharAnd
CharEven
CharHist
CharList
CharMirr
CharMix
CharNoList
CharNot
CharOdd
CharOne
CharOnly
CharOr
CharPix
CharRela
CharRelRep
CharRem
CharRepl
CharRLL
CharRLR
CharSHL
CharSHR
CharSList
CharSort
CharSub
CharSwap
CharWin
CharXOR

CountLeft
CountRight
Descend
Empty
hb_At
hb_RAt
hb_ValToStr
IsAlpha
IsDigit
IsLower
IsUpper

JustLeft
JustRight

Left
Len
Lower
LTrim

NumAt
NumToken
PadLeft
PadRight

PadC
PadL
PadR

POSALPHA
POSCHAR
POSDEL
POSDIFF
POSEQUAL
POSINS
POSLOWER
POSRANGE
POSREPL
POSUPPER

RangeRem
RangeRepl

RAt

RemAll

RemLeft
RemRight
ReplAll

Replicate

ReplLeft

ReplRight

RestToken

Right
RTrim

SaveToken

SetAtLike
Space
Str

StrDiff

StrFormat

StrSwap

StrTran
StrZero
SubStr

TabExpand
TabPack

Token

TokenAt
TokenEnd
TokenExit
TokenInit
TokenLower
TokenNext
TokenNum
TokenSep
TokenUpper

Transform
Trim
Upper
Val

ValPos
WordOne
WordOnly
WordRem
WordRepl
WordSwap

WordToChar


CT_TOKENUPPER

 TOKENUPPER()
 Converts the initial letter of a token into upper case
------------------------------------------------------------------------------
 Syntax

     TOKENUPPER(<cString>,[<cDelimiter>],[<nNumber>])
        --> cString

 Arguments

     <cString>  [@]  Designates the string that is searched for tokens
     (words).

     <cDelimiter>  Designates the delimiter list used to separate the tokens.

     <nNumber>  Designates the number of tokens in which the initial
     letter is converted into upper case.  The default value converts all
     tokens.

 Returns

     The processed character string is returned.

 Description

     The TOKENUPPER() function converts the initial letter of the tokens in
     the <cString> into an upper case letter.  If you specify a value for
     <nNumber>, only this number of tokens is processed.  If you do not
     specify a value for <nNumber>, all tokens in <cString> are processed.
     The function uses the following list of delimiters by default:

     CHR 32, 0, 9, 10, 13, 26, 32, 138, 141

     and the characters ,.;:!?/\<>()^#&%+-*

     The list can be replaced by your own list of delimiters, <cDelimiter>.
     Here are some examples of useful delimiters:

     Table 4-7:  String Manipulations
     ------------------------------------------------------------------------
     Description         <cDelimiter>
     ------------------------------------------------------------------------
      Pages               CHR(12) (Form Feed)
     Sentences           ".!?"
     File Names          ":\."
     Numerical strings   ",."
     Date strings        "/."
     Time strings        ":."
     ------------------------------------------------------------------------

 Note

     .  The return value of this function can be suppressed by
        using CSETREF() to save space in working memory.

 Examples

     .  Convert the initial letter of all tokens into upper case
        letters:

        ? TOKENUPPER("Good morning")            //"Good Morning"

     .  There are no detrimental effects if a larger number of tokens
        is specified than are available:

        ?TOKENUPPER("Good morning",, 5)         //"Good Morning"

     .  Process the first two tokens using your own delimiter:

        ?TOKENUPPER("/ab/ab/ab", "/", 2)        //"Ab/Ab/ab"

See Also: NUMTOKEN() TOKEN() ATTOKEN() TOKENLOWER() CSETREF()

 

Tools – String Manipulations

Introduction 
ADDASCII()   Adds a value to each ASCII code in a string
AFTERATNUM() Returns remainder of a string after nth appearance of sequence
ASCIISUM()   Finds sum of the ASCII values of all the characters of a string
ASCPOS()     Determines ASCII value of a character at a position in a string
ATADJUST()   Adjusts the beginning position of a sequence within a string
ATNUM()      Determines the starting position of a sequence within a string
ATREPL()     Searches for a sequence within a string and replaces it
ATTOKEN()    Finds the position of a token within a string
BEFORATNUM() Returns string segment before the nth occurrence of a sequence
CENTER()     Centers a string using pad characters
CHARADD()    Adds the corresponding ASCII codes of two strings
CHARAND()    Links corresponding ASCII codes of paired strings with AND
CHAREVEN()   Returns characters in the even positions of a string
CHARLIST()   Lists each character in a string
CHARMIRR()   Mirrors characters within a string
CHARMIX()    Mixes two strings together
CHARNOLIST() Lists the characters that do not appear in a string
CHARNOT()    Complements each character in a string
CHARODD()    Returns characters in the odd positions of a string
CHARONE()    Reduces adjoining duplicate characters in string to 1 character
CHARONLY()   Determines the common denominator between two strings
CHAROR()     Joins the corresponding ASCII code of paired strings with OR
CHARPACK()   Compresses (packs) a string
CHARRELA()   Correlates the character positions in paired strings
CHARRELREP() Replaces characters in a string depending on their correlation
CHARREM()    Removes particular characters from a string
CHARREPL()   Replaces certain characters with others
CHARSORT()   Sorts sequences within a string
CHARSPREAD() Expands a string at the tokens
CHARSWAP()   Exchanges all adjoining characters in a string
CHARUNPACK() Decompresses (unpacks) a string
CHARXOR()    Joins ASCII codes of paired strings with exclusive OR operation
CHECKSUM()   Calculates the checksum for a character string (algorithm)
COUNTLEFT()  Counts a particular character at the beginning of a string
COUNTRIGHT() Counts a particular character at the end of a string
CRYPT()      Encrypts and decrypts a string
CSETATMUPA() Determines setting of the multi-pass mode for ATXXX() functions
CSETREF()    Determines whether reference sensitive functions return a value
EXPAND()     Expands a string by inserting characters
JUSTLEFT()   Moves characters from the beginning to the end of a string
JUSTRIGHT()  Moves characters from the end of a string to the beginning
LIKE()       Compares character strings using wildcard characters
LTOC()       Converts a logical value into a character
MAXLINE()    Finds the longest line within a string
NUMAT()      Counts the number of occurrences of a sequence within a string
NUMLINE()    Determines the number of lines required for string output
NUMTOKEN()   Determines the number of tokens in a string
PADLEFT()    Pads a string on the left to a particular length
PADRIGHT()   Pads a string on the right to a particular length
POSALPHA()   Determines position of first alphabetic character in a string
POSCHAR()    Replaces individual character at particular position in string
POSDEL()     Deletes characters at a particular position in a string
POSDIFF()    Finds the first position from which two strings differ
POSEQUAL()   Finds the first position at which two strings are the same
POSINS()     Inserts characters at a particular position within a string
POSLOWER()   Finds the position of the first lower case alphabetic character
POSRANGE()   Determines position of first character in an ASCII code range
POSREPL()    Replaces one or more characters from a certain position
POSUPPER()   Finds the position of the first uppercase, alphabetic character
RANGEREM()   Deletes characters that are within a specified ASCII code range
RANGEREPL()  Replaces characters within a specified ASCII code range
REMALL()     Removes characters from the beginning and end of a string
REMLEFT()    Removes particular characters from the beginning of a string
REMRIGHT()   Removes particular characters at the end of a string
REPLALL()    Exchanges characters at the beginning and end of a string
REPLLEFT()   Exchanges particular characters at the beginning of a string
REPLRIGHT()  Exchanges particular characters at the end of a string
RESTTOKEN()  Recreates an incremental tokenizer environment
SAVETOKEN()  Saves the incremental tokenizer environment to a variable
SETATLIKE()  Provides an additional search mode for all AT functions
STRDIFF()    Finds similarity between two strings (Levenshtein Distance)
STRSWAP()    Interchanges two strings
TABEXPAND()  Converts tabs to spaces
TABPACK()    Converts spaces into tabs
TOKEN()      Selects the nth token from a string
TOKENAT()    Determines the most recent TOKENNEXT() position within a string
TOKENEND()   Determines if more tokens are available in TOKENNEXT()
TOKENINIT()  Initializes a string for TOKENNEXT()
TOKENLOWER() Converts initial alphabetic character of a token into lowercase
TOKENNEXT()  Provides an incremental tokenizer
TOKENSEP()   Provides separator before/after most recently retrieved TOKEN()
TOKENUPPER() Converts the initial letter of a token into upper case
VALPOS()     Determines numerical value of character at particular position
WORDONE()    Reduces multiple appearances of double characters to one
WORDONLY()   Finds common denominator of 2 strings on double character basis
WORDREPL()   Replaces particular double characters with others
WORDSWAP()   Exchanges double characters lying beside each other in a string
WORDTOCHAR() Exchanges double characters for individual ones