TokenUpper()

TokenUpper()

Change the first letter of tokens to upper case

Syntax

      TokenUpper( <[@]cString>, [<cTokenizer>], [<nTokenCount>],
                  [<nSkipWidth>] ) -> cString

Arguments

<[@]cString> is the processed string

[<cTokenizer>] is a list of characters separating the tokens in <cString>. Default: chr(0) + chr(9) + chr(10) + chr(13) + chr(26) + chr(32) + chr(32) + chr(138) + chr(141) + ", .;:!\?/\\<>()#&%+-*"

[<nTokenCount>] specifies the number of tokens that should be processed. Default: all tokens

[<nSkipWidth>] specifies the maximum number of successive tokenizing characters that are combined as ONE token stop; specifying 1, for example, can yield empty tokens. Default: 0, i.e. any number of successive tokenizing characters is combined as ONE token stop

Returns

<cString> the string with the uppercased tokens

Description

The TokenUpper() function changes the first letter of tokens in <cString> to upper case. To do this, it uses the same tokenizing mechanism as the token() function. If TokenUpper() extracts a token that starts with a letter, this letter will be changed to upper case.

The return value of this function can be suppressed by setting the CSETREF() switch to .T.; in that case <cString> must be passed by reference to obtain the result.
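
A minimal sketch of this by-reference usage, assuming the CT switch function csetref() behaves as described above; cText is a hypothetical variable:

      csetref( .T. )           // suppress return values of by-reference CT calls
      cText := "hello world"
      TokenUpper( @cText )     // cText is modified in place
      ? cText                  // --> "Hello World"
      csetref( .F. )           // restore the default behaviour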

Examples

      ? TokenUpper( "Hello, world, here I am!" )         
                 // "Hello, World, Here I Am!"
      ? TokenUpper( "Hello, world, here I am!",, 3 )     
                 // "Hello, World, Here I am!"
      ? TokenUpper( "Hello, world, here I am!", ",", 3 ) 
                 // "Hello, world, here I am!"
      ? TokenUpper( "Hello, world, here I am!", " w" )   
                 // "Hello, wOrld, Here I Am!"

Tests

      TokenUpper( "Hello, world, here I am!" )         == 
                  "Hello, World, Here I Am!"
      TokenUpper( "Hello, world, here I am!",, 3 )     == 
                  "Hello, World, Here I am!"
      TokenUpper( "Hello, world, here I am!", ",", 3 ) == 
                  "Hello, world, here I am!"
      TokenUpper( "Hello, world, here I am!", " w" )   == 
                  "Hello, wOrld, Here I Am!"

Compliance

TokenUpper() is compatible with CT3’s TokenUpper(), but a new 4th parameter, <nSkipWidth>, has been added for synchronization with the other token functions.

Platforms

All

Files

Source is token1.c, library is libct.

Seealso

TOKEN(), NUMTOKEN(), ATTOKEN(), TOKENLOWER(), TOKENSEP(), CSETREF()

TokenSep()

TokenSep()

Retrieves the token separators of the last token() call

Syntax

      TokenSep( [<lMode>] ) -> cSeparator

Arguments

[<lMode>] if set to .T., the token separator BEHIND the token retrieved from the last token() call will be returned. Default: .F., return the separator BEFORE the token

Returns

Depending on the setting of <lMode>, the separating character of the token retrieved from the last token() call will be returned. These separating characters can now also be retrieved with the token() function.

Description

When one extracts tokens from a string with the token() function, one might be interested in the separator characters that were used to extract a specific token. To get this information, you can either use the TokenSep() function after each token() call, or use the new 5th and 6th parameters of the token() function.

Examples

      see TOKEN() function
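
A minimal sketch of both ways to obtain the separators; cPre and cPost are hypothetical variables used for the alternative via token()'s new parameters, and the actual separator values depend on the default tokenizer list:

      ? token( "Hello, World!",, 2 )    // --> "World"
      ? TokenSep()                      // separator BEFORE "World"
      ? TokenSep( .T. )                 // separator BEHIND "World"
      // the same information via the new 5th and 6th parameters of token():
      ? token( "Hello, World!",, 2,, @cPre, @cPost )
      ? cPre, cPost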

Compliance

TokenSep() is compatible with CT3’s TokenSep().

Platforms

All

Files

Source is token1.c, library is libct.

Seealso

TOKEN(), NUMTOKEN(), ATTOKEN(), TOKENLOWER(), TOKENUPPER()

TokenNum()

TokenNum()

Get the total number of tokens in a token environment

Syntax

      TokenNum( [<@cTokenEnvironment>] ) -> nNumberofTokens

Arguments

<@cTokenEnvironment> a token environment

Returns

<nNumberofTokens> number of tokens in the token environment

Description

The TokenNum() function can be used to retrieve the total number of tokens in a token environment. If the parameter <@cTokenEnvironment> is supplied (must be by reference), the information from this token environment is used, otherwise the global token environment is used.

Examples

      tokeninit( "a.b.c.d", ".", 1 )  // initialize global token environment
      ? TokenNum()  // --> 4
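
A minimal sketch with a local token environment; cTE is a hypothetical variable that receives the environment by reference:

      tokeninit( "x;y;z", ";", 1, @cTE )  // store the environment in cTE only
      ? TokenNum( @cTE )                  // --> 3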

Compliance

TokenNum() is a new function in Harbour’s CT3 library.

Platforms

All

Files

Source is token2.c, library is libct.

Seealso

TOKENINIT(), TOKENEXIT(), TOKENNEXT(), TOKENAT(), SAVETOKEN(), RESTTOKEN(), TOKENEND()

TokenNext()

TokenNext()

Successively obtains tokens from a string

Syntax

      TokenNext( <[@]cString>, [<nToken>],
                 [<@cTokenEnvironment>] ) -> cToken

Arguments

<[@]cString> the processed string

<nToken> a token number

<@cTokenEnvironment> a token environment

Returns

<cToken> a token from <cString>

Description

With TokenNext(), the tokens determined with the TOKENINIT() function can be retrieved. To do this, TokenNext() uses the information stored in either the global token environment or the local one supplied by <cTokenEnvironment>. Note that, if supplied, this 3rd parameter always has to be passed by reference.

If the 2nd parameter, <nToken>, is given, TokenNext() simply returns the <nToken>th token without manipulating the token environment (TE) counter. Otherwise the token pointed to by the TE counter is returned and the counter is incremented by one. This way, a simple loop with TOKENEND() can be used to retrieve all tokens of a string successively.

Note that <cString> does not have to be the same as the one used in TOKENINIT(), so one can do a "correlational tokenization", i.e. tokenize a string as if it were another one. E.g., after TOKENINIT() with the string "AA, BBB", calling TokenNext() with "CCCEE" would first give "CC" and then "E" (because "CCCEE" is not long enough for the second token).
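
A minimal sketch of this correlational tokenization, using the token positions just described:

      tokeninit( "AA, BBB" )    // token positions: 1-2 and 5-7
      ? TokenNext( "CCCEE" )    // --> "CC"
      ? TokenNext( "CCCEE" )    // --> "E" (clipped, the string ends at position 5)
      tokenexit()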

Examples

      // default behaviour
      tokeninit( cString ) // initialize a token environment
      DO WHILE ! tokenend()
         ? TokenNext( cString )  // get all tokens successively
      ENDDO
      ? TokenNext( cString, 3 )  // get the 3rd token, counter will remain 
                                 // the same
      tokenexit()                // free the memory used for the global 
                                 // token environment

Compliance

TokenNext() is compatible with CT3’s TokenNext(), but there are two additional parameters featuring local token environments and optional access to tokens.

Platforms

All

Files

Source is token2.c, library is libct.

Seealso

TOKENINIT(), TOKENEXIT(), TOKENNUM(), TOKENAT(), SAVETOKEN(), RESTTOKEN(), TOKENEND()

TokenLower()

TokenLower()

Change the first letter of tokens to lower case

Syntax

      TokenLower( <[@]cString>, [<cTokenizer>], [<nTokenCount>],
                  [<nSkipWidth>] ) -> cString

Arguments

<[@]cString> is the processed string

[<cTokenizer>] is a list of characters separating the tokens in <cString>. Default: chr(0) + chr(9) + chr(10) + chr(13) + chr(26) + chr(32) + chr(32) + chr(138) + chr(141) + ", .;:!\?/\\<>()#&%+-*"

[<nTokenCount>] specifies the number of tokens that should be processed. Default: all tokens

[<nSkipWidth>] specifies the maximum number of successive tokenizing characters that are combined as ONE token stop; specifying 1, for example, can yield empty tokens. Default: 0, i.e. any number of successive tokenizing characters is combined as ONE token stop

Returns

<cString> the string with the lowercased tokens

Description

The TokenLower() function changes the first letter of tokens in <cString> to lower case. To do this, it uses the same tokenizing mechanism as the token() function. If TokenLower() extracts a token that starts with a letter, this letter will be changed to lower case.

The return value of this function can be suppressed by setting the CSETREF() switch to .T.; in that case <cString> must be passed by reference to obtain the result.

Examples

      ? TokenLower( "Hello, World, here I am!" )         
                    // "hello, world, here i am!"
      ? TokenLower( "Hello, World, here I am!",, 3 )     
                    // "hello, world, here I am!"
      ? TokenLower( "Hello, World, here I am!", ",", 3 ) 
                    // "hello, World, here I am!"
      ? TokenLower( "Hello, World, here I am!", " W" )   
                    // "hello, World, here i am!"

Tests

      TokenLower( "Hello, World, here I am!" )         
               == "hello, world, here i am!"
      TokenLower( "Hello, World, here I am!",, 3 )     
               == "hello, world, here I am!"
      TokenLower( "Hello, World, here I am!", ",", 3 ) 
               == "hello, World, here I am!"
      TokenLower( "Hello, World, here I am!", " W" )   
               == "hello, World, here i am!"

Compliance

TokenLower() is compatible with CT3’s TokenLower(), but a new 4th parameter, <nSkipWidth>, has been added for synchronization with the other token functions.

Platforms

All

Files

Source is token1.c, library is libct.

Seealso

TOKEN(), NUMTOKEN(), ATTOKEN(), TOKENUPPER(), TOKENSEP(), CSETREF()

TokenInit()

TokenInit()

Initializes a token environment

Syntax

      TokenInit( [<[@]cString>], [<cTokenizer>], [<nSkipWidth>],
                 [<@cTokenEnvironment>] ) -> lState

Arguments

<[@]cString> is the processed string

<cTokenizer> is a list of characters separating the tokens in <cString>. Default: chr(0) + chr(9) + chr(10) + chr(13) + chr(26) + chr(32) + chr(32) + chr(138) + chr(141) + ", .;:!\?/\\<>()#&%+-*"

<nSkipWidth> specifies the maximum number of successive tokenizing characters that are combined as ONE token stop; specifying 1, for example, can yield empty tokens. Default: 0, i.e. any number of successive tokenizing characters is combined as ONE token stop

<@cTokenEnvironment> is a token environment stored in a binary encoded string

Returns

<lState> success of the initialization

Description

The TokenInit() function initializes a token environment. A token environment is the information about how a string is to be tokenized. This information is created in the process of tokenizing the string <cString> (the same process the TOKEN() function uses) with the help of the <cTokenizer> and <nSkipWidth> parameters.

This token environment can be very useful when large strings have to be tokenized, since the tokenization has to take place only once, whereas the TOKEN() function must always start the tokenizing process from scratch.

Unlike CT3, this function provides two mechanisms for storing the resulting token environment. If a variable is passed by reference as 4th parameter, the token environment is stored in this variable; otherwise the global token environment is used. Do not modify the token environment string directly!

Additionally, a counter is stored in the token environment, so that the tokens can be obtained successively. This counter is initially set to 1. When the TokenInit() function is called without a string to tokenize, the counter of either the global environment or the environment given by reference in the 4th parameter is rewound to 1.

Additionally, unlike CT3, TokenInit() does not need the string <cString> to be passed by reference, since the string has to be provided again in calls to TOKENNEXT().

Examples

      TokenInit( cString )             // tokenize the string <cString> with
                                       // default rules, store the token
                                       // environment globally and eventually
                                       // delete an old global token environment
      TokenInit( @cString )            // no difference in result, but eventually
                                       // faster, since the string need not be
                                       // copied
      TokenInit()                      // rewind counter of global TE to 1
      TokenInit( "1,2,3", ",", 1 )     // tokenize constant string, store in
                                       // global token environment
      TokenInit( cString,, 1, @cTE1 )  // tokenize cString and store token
                                       // environment in cTE1 only, without
                                       // overriding the global token environment
      TokenInit( cString,, 1, cTE1 )   // tokenize cString and store token
                                       // environment in the GLOBAL token
                                       // environment, since the 4th parameter
                                       // is not given by reference!
      TokenInit( ,,, @cTE1 )           // set counter in TE stored in cTE1 to 1
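
A minimal sketch of a complete run with a local token environment, leaving the global environment untouched; cTE is a hypothetical variable that receives the environment by reference:

      tokeninit( "one two three",,, @cTE )
      DO WHILE ! tokenend( @cTE )
         ? tokennext( "one two three",, @cTE )   // --> "one", "two", "three"
      ENDDO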

Compliance

TokenInit() is compatible with CT3’s TokenInit(), but there is an additional parameter featuring local token environments.

Platforms

All

Files

Source is token2.c, library is libct.

Seealso

TOKEN(), TOKENEXIT(), TOKENNEXT(), TOKENNUM(), TOKENAT(), SAVETOKEN(), RESTTOKEN(), TOKENEND()

TokenExit()

TokenExit()

Release global token environment

Syntax

      TokenExit() -> lStaticEnvironmentReleased

Returns

<lStaticEnvironmentReleased> .T., if global token environment is successfully released

Description

The TokenExit() function releases the memory associated with the global token environment. It should be called once for every tokeninit() call that uses the global token environment. Additionally, TokenExit() is implicitly called from CTEXIT() to free the memory at library shutdown.

Examples

      tokeninit( cString ) // initialize a token environment
      DO WHILE ! tokenend()
         ? tokennext( cString )  // get all tokens successively
      ENDDO
      ? tokennext( cString, 3 )  // get the 3rd token, counter 
                                 // will remain the same
      TokenExit()                // free the memory used for the 
                                 // global token environment

Compliance

TokenExit() is a new function in Harbour’s CT3 library.

Platforms

All

Files

Source is token2.c, library is libct.

Seealso

TOKENINIT(), TOKENNEXT(), TOKENNUM(), TOKENAT(), SAVETOKEN(), RESTTOKEN(), TOKENEND()

TokenEnd()

TokenEnd()

Check whether additional tokens are available with TOKENNEXT()

Syntax

      TokenEnd( [<@cTokenEnvironment>] ) -> lTokenEnd

Arguments

<@cTokenEnvironment> a token environment

Returns

<lTokenEnd> .T., if additional tokens are available

Description

The TokenEnd() function can be used to check whether the next call to TOKENNEXT() would return a new token. This cannot be decided with TOKENNEXT() alone, since an empty token cannot be distinguished from the absence of further tokens.

If the parameter <@cTokenEnvironment> is supplied (must be by reference), the information from this token environment is used, otherwise the global TE is used.

With a combination of TokenEnd() and TOKENNEXT(), all tokens from a string can be retrieved successively (see example).

Examples

      tokeninit( "a.b.c.d", ".", 1 )  // initialize global token environment
      DO WHILE ! TokenEnd()
         ? tokennext( "a.b.c.d" )     // get all tokens successivly
      ENDDO
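
The need for TokenEnd() shows up with empty tokens; a minimal sketch where the second token is empty:

      tokeninit( "a..b", ".", 1 )     // skip width 1, so "a..b" contains an empty token
      DO WHILE ! TokenEnd()
         ? "<" + tokennext( "a..b" ) + ">"   // --> "<a>", "<>", "<b>"
      ENDDO
      tokenexit()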

Compliance

TokenEnd() is compatible with CT3’s TokenEnd(), but there is an additional parameter featuring local token environments.

Platforms

All

Files

Source is token2.c, library is libct.

Seealso

TOKENINIT(), TOKENEXIT(), TOKENNEXT(), TOKENNUM(), TOKENAT(), SAVETOKEN(), RESTTOKEN()

TokenAt()

TokenAt()

Get start and end positions of tokens in a token environment

Syntax

      TOKENAT( [<lSeparatorPositionBehindToken>], [<nToken>],
               [<@cTokenEnvironment>] ) -> nPosition

Arguments

<lSeparatorPositionBehindToken> .T., if TokenAt() should return the position of the separator character BEHIND the token. Default: .F., return the start position of the token.

<nToken> a token number

<@cTokenEnvironment> a token environment

Returns

<nPosition> See description

Description

The TokenAt() function is used to retrieve the start and end positions of the tokens in a token environment. Note, however, that the position of the last character of a token is given by TokenAt( .T. ) - 1.

If the 2nd parameter, <nToken>, is given, TokenAt() returns the position for the <nToken>th token. Otherwise the token pointed to by the TE counter, i.e. the token that will be retrieved by the NEXT call to TOKENNEXT(), is used.

If the parameter <@cTokenEnvironment> is supplied (must be by reference), the information from this token environment is used, otherwise the global TE is used.

Examples

      tokeninit( cString ) // initialize a token environment
      DO WHILE ! tokenend()
         ? "From", tokenat(), "to", tokenat( .T. ) - 1
         ? tokennext( cString )  // get all tokens successively
      ENDDO
      ? tokennext( cString, 3 )  // get the 3rd token, counter will
                                 // remain the same
      tokenexit()                // free the memory used for the
                                 // global token environment
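
A minimal sketch that uses the returned positions to cut a token out of the string by hand:

      tokeninit( "abc;def", ";", 1 )
      ? SubStr( "abc;def", TokenAt(), TokenAt( .T. ) - TokenAt() )  // --> "abc"
      tokenexit()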

Compliance

TokenAt() is compatible with CT3’s TokenAt(), but there are two additional parameters featuring local token environments and optional access to tokens.

Platforms

All

Files

Source is token2.c, library is libct.

Seealso

TOKENINIT(), TOKENEXIT(), TOKENNEXT(), TOKENNUM(), SAVETOKEN(), RESTTOKEN(), TOKENEND()

Token()

Token()

Tokens of a string

Syntax

      TOKEN( <cString>, [<cTokenizer>],
             [<nTokenCount>], [<nSkipWidth>],
             [<@cPreTokenSep>], [<@cPostTokenSep>] ) -> cToken

Arguments

<cString> is the processed string

[<cTokenizer>] is a list of characters separating the tokens in <cString>. Default: chr(0) + chr(9) + chr(10) + chr(13) + chr(26) + chr(32) + chr(32) + chr(138) + chr(141) + ", .;:!\?/\\<>()#&%+-*"

[<nTokenCount>] specifies the number of the token that should be extracted. Default: last token

[<nSkipWidth>] specifies the maximum number of successive tokenizing characters that are combined as ONE token stop; specifying 1, for example, can yield empty tokens. Default: 0, i.e. any number of successive tokenizing characters is combined as ONE token stop

[<@cPreTokenSep>] if passed by reference, the token separator before the extracted token will be stored here

[<@cPostTokenSep>] if passed by reference, the token separator after the extracted token will be stored here

Returns

<cToken> the token specified by the parameters given above

Description

The TOKEN() function extracts the <nTokenCount>th token from the string <cString>. In the course of this, the tokens in the string are separated by the character(s) specified in <cTokenizer>. The function may also extract empty tokens, if you specify a skip width other than zero.

Be aware of the new 5th and 6th parameters, where the TOKEN() function stores the tokenizing characters before and after the extracted token. Additional calls to the TOKENSEP() function are therefore not necessary.
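
A minimal sketch of these parameters; cPre and cPost are hypothetical variables that receive the surrounding separators:

      ? token( "a+b-c", "+-", 2,, @cPre, @cPost )  // --> "b"
      ? cPre                                       // --> "+"
      ? cPost                                      // --> "-"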

Examples

      ? token( "Hello, World!" )            -->  "World"
      ? token( "Hello, World!",, 2, 1 )     --> ""
      ? token( "Hello, World!", ",", 2, 1 ) --> " World!"
      ? token( "Hello, World!", " ", 2, 1 ) --> "World!"

Tests

      token( "Hello, World!" )            == "World"
      token( "Hello, World!",, 2, 1 )     == ""
      token( "Hello, World!", ",", 2, 1 ) == " World!"
      token( "Hello, World!", " ", 2, 1 ) == "World!"

Compliance

TOKEN() is compatible with CT3’s TOKEN(), but two additional parameters have been added, where the TOKEN() function can store the separators before and after the current token.

Platforms

All

Files

Source is token1.c, library is libct.

Seealso

NUMTOKEN(), ATTOKEN(), TOKENLOWER(), TOKENUPPER(), TOKENSEP()