%SQL.Util.RowType

Properties

property column as %CacheString [ MultiDimensional ];
This is a temporary structure - perhaps. Normally the output from a parser is a parse tree; this is a parse tree of sorts.
column - number of columns
column(n) - column name
column(n,1) - column SQL type
column(n,2) - type
column(n,2,) - value of type parameter
Property methods: columnGet(), columnIsValid(), columnSet()
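The layout above can be read back with ordinary multidimensional subscript access. A minimal sketch, assuming tRowType is a populated instance of this class:

```objectscript
 // Hedged sketch: walk the column array as laid out above.
 // tRowType is assumed to be a populated %SQL.Util.RowType instance.
 set tCount = tRowType.column            // number of columns
 for i = 1:1:tCount {
     set tName = tRowType.column(i)      // column name
     set tType = tRowType.column(i,1)    // column SQL type
     write tName, " ", tType, !
 }
```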
property currentLexeme as %CacheString;
The current lexeme value. It may be a composite of several tokens.
Property methods: currentLexemeGet(), currentLexemeIsValid(), currentLexemeSet()
property debugMode as %Integer [ InitialExpression = $$$CompileDebugMode ];
Property methods: debugModeDisplayToLogical(), debugModeGet(), debugModeIsValid(), debugModeLogicalToDisplay(), debugModeNormalize(), debugModeSet()
property source as %Stream.Object;
The source is the text to be parsed. Files are bound to a file stream object, arrays are copied to a global stream object.
Property methods: sourceDelete(), sourceGet(), sourceGetObject(), sourceGetObjectId(), sourceGetSwizzled(), sourceIsValid(), sourceNewObject(), sourceOid(), sourceOpen(), sourceSet(), sourceSetObject(), sourceSetObjectId(), sourceUnSwizzle()
property sourceLine as %Integer;
The current sourceLine number. Used for error reporting.
Property methods: sourceLineDisplayToLogical(), sourceLineGet(), sourceLineIsValid(), sourceLineLogicalToDisplay(), sourceLineNormalize(), sourceLineSet()
property token as %CacheString [ MultiDimensional ];
This is an array of tokens. It is managed completely by the tokenizer() and only accessed through public accessor methods, including nextToken.
Property methods: tokenGet(), tokenIsValid(), tokenSet()
property tokenPtr as %Integer;
The pointer to the current token in the source stream
Property methods: tokenPtrDisplayToLogical(), tokenPtrGet(), tokenPtrIsValid(), tokenPtrLogicalToDisplay(), tokenPtrNormalize(), tokenPtrSet()
property transitionStack as %Integer;
The state transition stack. Each production pushes the next state onto this stack. If continuation is necessary then the production first pushes a return state onto the transition stack. States are simple integers, with 1 being the initial state. Continuations are the same integer followed by a "c".
Property methods: transitionStackDisplayToLogical(), transitionStackGet(), transitionStackIsValid(), transitionStackLogicalToDisplay(), transitionStackNormalize(), transitionStackSet()

Methods

classmethod GenerateProperties(pClass As %Dictionary.ClassDefinition, ByRef pColumn, ByRef pSequence As %Integer = 0)
pMetadata and pObjects are generated metadata from the row type as contained in pColumn.
method consumeWhite()
method delimitedToken(pBegin, pEnd)
Extract a delimited token from the lexeme stream. If the current lexeme is the pBegin value then consume all tokens from the current position until a pEnd token is found that is neither nested nor inside a string.
method lookAheadSkipWhite(pLexPtr)
This function looks ahead to the first non-white token.
method lookAheadToken(pLexPtr)
classmethod macroDefs()
method nestedStringLiteral(pQuoteChar As %String = """")
method nextToken(pTerminators, pStrip=0, pStringAsToken As %Integer = 0, pQuoteChars As %String = """")
This should really be using regular expressions and matching the longest possible lexeme against the valid lookaheads. For now, just look for a terminator in the pTerminators string. If pStringAsToken is true then look for a leading quote character. If found then invoke nestedStringLiteral to consume all tokens up to the ending quote and return the nested string as a single token.
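A hedged usage sketch of the call described above (the return value is assumed to be the consumed token; verify against the class source):

```objectscript
 // Scan for the next token, stopping at a comma or right paren,
 // stripping whitespace (pStrip=1), and treating double-quoted
 // strings as single tokens (pStringAsToken=1).
 set tToken = tRowType.nextToken(",)", 1, 1, """")
```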
method parse(ByRef pColumns As %CacheString)
classmethod parseArray(pSource As %CacheString, ByRef pColumns As %CacheString)
classmethod parseFile(pFilename, ByRef pColumns As %CacheString)
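A hedged sketch of driving the parser from a file. The file name is hypothetical, and the pColumns output array is assumed to follow the same layout as the column property described above:

```objectscript
 // Parse a row-type declaration from a file and dump the columns.
 do ##class(%SQL.Util.RowType).parseFile("rowtype.txt", .tColumns)
 for i = 1:1:tColumns {
     write tColumns(i), " ", tColumns(i,1), !   // name, SQL type
 }
```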
method s1()
STATE: 1 - The initial state production line. LOOKAHEADS: rowtype | objecttype | rowtype_body
method s1c()
STATE: 1c - continuation of the initial state
method s2()
STATE: 2 - ROW LOOKAHEADS: rowtype_body
method s3()
STATE: 3 - OBJECT LOOKAHEADS: objecttype_body
method s4()
STATE: 4 - rowtype_body ::= LOOKAHEADS: sql_identifier | WHITESPACE | COMMA | right_paren
method s5()
STATE: 5 - objecttype_body ::= LOOKAHEADS: object_identifier | WHITESPACE | COMMA | right_paren
method s6()
STATE: 6 - field_definition ::= field_name data_type LOOKAHEADS: sql_identifier ( ::= ::= | )
method s6c()
method s7()
STATE: 7 - property_definition ::= LOOKAHEADS:
method s8()
STATE: 8 - datatype ::= | | | | LOOKAHEADS:
method simpleKeywordValue(pDefault="")
method terminatedKeywordValue(pTerminator=";", pIgnoreString=0)
#; terminatedKeywordValue(pTerminator,pIgnoreString)
#;
#; This function scans the source until the token pTerminator is encountered in the stream.
#; If pIgnoreString is TRUE then pTerminator can be found anywhere, otherwise pTerminator
#; is only recognized if it is not in a quoted string.
method tokenizer()
method transition()

Inherited Members

Inherited Methods
