Removed vendored gems (#5638).
git-svn-id: svn+ssh://rubyforge.org/var/svn/redmine/trunk@8905 e93f8b46-1217-0410-a6f0-8f06a7374b81
This commit is contained in:
parent 9315039e0a
commit 28338a6f4a
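With the vendored copy removed, the CodeRay dependency has to come from RubyGems instead. A minimal sketch of how such a dependency is typically declared with Bundler; the file name is the Bundler convention, but the exact version constraint (and whether Redmine used Bundler at this point) is an assumption, not part of this commit:

  # Gemfile (hypothetical entry replacing the vendored copy of CodeRay;
  # Redmine's actual constraint may differ)
  gem "coderay", "~> 1.0.0"

After such an entry, `bundle install` would fetch the gem instead of relying on the copy that this commit deletes.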
@@ -1,78 +0,0 @@
--- !ruby/object:Gem::Specification
name: coderay
version: !ruby/object:Gem::Version
  hash: 23
  prerelease:
  segments:
  - 1
  - 0
  - 0
  version: 1.0.0
platform: ruby
authors:
- Kornelius Kalnbach
autorequire:
bindir: bin
cert_chain: []

date: 2011-09-21 00:00:00 Z
default_executable: coderay
dependencies: []

description: Fast and easy syntax highlighting for selected languages, written in Ruby. Comes with RedCloth integration and LOC counter.
email:
- murphy@rubychan.de
executables:
- coderay
extensions: []

extra_rdoc_files: []

files:
- test/functional/basic.rb
- test/functional/examples.rb
- test/functional/for_redcloth.rb
- test/functional/suite.rb
- bin/coderay
has_rdoc: true
homepage: http://coderay.rubychan.de
licenses: []

post_install_message:
rdoc_options: []

require_paths:
- lib
required_ruby_version: !ruby/object:Gem::Requirement
  none: false
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      hash: 59
      segments:
      - 1
      - 8
      - 6
      version: 1.8.6
required_rubygems_version: !ruby/object:Gem::Requirement
  none: false
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      hash: 3
      segments:
      - 0
      version: "0"
requirements: []

rubyforge_project: coderay
rubygems_version: 1.6.2
signing_key:
specification_version: 3
summary: Fast syntax highlighting for selected languages.
test_files:
- test/functional/basic.rb
- test/functional/examples.rb
- test/functional/for_redcloth.rb
- test/functional/suite.rb
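The file above is a gemspec serialized as YAML, the format RubyGems itself emits. If you need to inspect such a file programmatically, something along these lines should work; `Gem::Specification.from_yaml` is a real RubyGems method, but the file path and the exact fields printed are illustrative, not taken from Redmine:

  require 'rubygems'

  # Load the YAML-format gemspec and read a few fields back
  # (the path 'coderay.gemspec' is only an example).
  spec = Gem::Specification.from_yaml(File.read('coderay.gemspec'))
  puts spec.name      # => "coderay"
  puts spec.version   # => "1.0.0"
  puts spec.summary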
@@ -1,504 +0,0 @@
                  GNU LESSER GENERAL PUBLIC LICENSE
                       Version 2.1, February 1999

 Copyright (C) 1991, 1999 Free Software Foundation, Inc.
 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

[This is the first released version of the Lesser GPL. It also counts
 as the successor of the GNU Library Public License, version 2, hence
 the version number 2.1.]

(The remainder of the deleted file is the unmodified standard text of the
GNU Lesser General Public License, version 2.1: the Preamble, Terms and
Conditions sections 0-16, the NO WARRANTY sections, and "How to Apply
These Terms to Your New Libraries".)
@@ -1,123 +0,0 @@
= CodeRay

Tired of blue'n'gray? Try the original version of this documentation on
coderay.rubychan.de[http://coderay.rubychan.de/doc/] :-)

== About

CodeRay is a Ruby library for syntax highlighting.

You put your code in, and you get it back colored; Keywords, strings,
floats, comments - all in different colors. And with line numbers.

*Syntax* *Highlighting*...
* makes code easier to read and maintain
* lets you detect syntax errors faster
* helps you to understand the syntax of a language
* looks nice
* is what everybody wants to have on their website
* solves all your problems and makes the girls run after you


== Installation

  % gem install coderay


=== Dependencies

CodeRay needs Ruby 1.8.7+ or 1.9.2+. It also runs on Rubinius and JRuby.


== Example Usage

  require 'coderay'

  html = CodeRay.scan("puts 'Hello, world!'", :ruby).div(:line_numbers => :table)

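The snippet above produces an HTML fragment that relies on an external stylesheet. As a small follow-up sketch (the encoder names below match how this commit's bin/coderay uses the :terminal and :page outputs, but verify them against your CodeRay version), the same tokens can also be rendered as a self-contained page or as ANSI-colored terminal output:

  require 'coderay'

  tokens = CodeRay.scan("puts 'Hello, world!'", :ruby)

  puts tokens.page      # complete HTML page with embedded styles
  puts tokens.terminal  # ANSI-colored output for the shell
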
== Documentation

See CodeRay.


== Credits

=== Special Thanks to

* licenser (Heinz N. Gies) for ending my QBasic career, inventing the Coder
  project and the input/output plugin system.
  CodeRay would not exist without him.
* bovi (Daniel Bovensiepen) for helping me out on various occasions.

=== Thanks to

* Caleb Clausen for writing RubyLexer (see
  http://rubyforge.org/projects/rubylexer) and lots of very interesting mail
  traffic
* birkenfeld (Georg Brandl) and mitsuhiko (Armin Ronacher) for PyKleur, now pygments.
  You guys rock!
* Jamis Buck for writing Syntax (see http://rubyforge.org/projects/syntax).
  I got some useful ideas from it.
* Doug Kearns and everyone else who worked on ruby.vim - it not only helped me
  coding CodeRay, but also gave me a wonderful target to reach for the Ruby
  scanner.
* everyone who uses CodeBB on http://www.rubyforen.de and http://www.python-forum.de
* iGEL, magichisoka, manveru, WoNáDo and everyone I forgot from rubyforen.de
* Dethix from ruby-mine.de
* zickzackw
* Dookie (who is no longer with us...) and Leonidas from http://www.python-forum.de
* Andreas Schwarz for finding out that CaseIgnoringWordList was not case
  ignoring! Such things really make you write tests.
* closure for the first version of the Scheme scanner.
* Stefan Walk for the first version of the JavaScript and PHP scanners.
* Josh Goebel for another version of the JavaScript scanner, a SQL and a Diff scanner.
* Jonathan Younger for pointing out the licence confusion caused by a wrong LICENSE file.
* Jeremy Hinegardner for finding the shebang-on-empty-file bug in FileType.
* Charles Oliver Nutter and Yehuda Katz for helping me benchmark CodeRay on JRuby.
* Andreas Neuhaus for pointing out a markup bug in coderay/for_redcloth.
* 0xf30fc7 for the FileType patch concerning Delphi file extensions.
* The folks at redmine.org - thank you for using and fixing CodeRay!
* Keith Pitt for his SQL scanners
* Rob Aldred for the terminal encoder
* Trans for pointing out $DEBUG dependencies
* Flameeyes for finding that Term::ANSIColor was obsolete
* matz and all Ruby gods and gurus
* The inventors of: the computer, the internet, the true color display, HTML &
  CSS, VIM, Ruby, pizza, microwaves, guitars, scouting, programming, anime,
  manga, coke and green ice tea.

Where would we be without all those people?

=== Created using

* Ruby[http://ruby-lang.org/]
* Chihiro (my Sony VAIO laptop); Henrietta (my old MacBook);
  Triella, born Rico (my new MacBook); as well as
  Seras and Hikari (my PCs)
* RDE[http://homepage2.nifty.com/sakazuki/rde_e.html],
  VIM[http://vim.org] and TextMate[http://macromates.com]
* Subversion[http://subversion.tigris.org/]
* Redmine[http://redmine.org/]
* Firefox[http://www.mozilla.org/products/firefox/],
  Firebug[http://getfirebug.com/], Safari[http://www.apple.com/safari/], and
  Thunderbird[http://www.mozilla.org/products/thunderbird/]
* RubyGems[http://docs.rubygems.org/] and Rake[http://rake.rubyforge.org/]
* TortoiseSVN[http://tortoisesvn.tigris.org/] using Apache via
  XAMPP[http://www.apachefriends.org/en/xampp.html]
* RDoc (though I'm quite unsatisfied with it)
* Microsoft Windows (yes, I confess!) and MacOS X
* GNUWin32, MinGW and some other tools to make the shell under Windows a bit
  less useless
* Term::ANSIColor[http://term-ansicolor.rubyforge.org/]
* PLEAC[http://pleac.sourceforge.net/] code examples
* GitHub
* Travis CI (http://travis-ci.org/rubychan/github)

=== Free

* As you can see, CodeRay was created under heavy use of *free* software.
* So CodeRay is also *free*.
* If you use CodeRay to create software, think about making this software
  *free*, too.
* Thanks :)
@@ -1,35 +0,0 @@
$:.unshift File.dirname(__FILE__) unless $:.include? '.'

ROOT = '.'
LIB_ROOT = File.join ROOT, 'lib'

task :default => :test

if File.directory? 'rake_tasks'

  # load rake tasks from subfolder
  for task_file in Dir['rake_tasks/*.rake'].sort
    load task_file
  end

else

  # fallback tasks when rake_tasks folder is not present (eg. in the distribution package)
  desc 'Run CodeRay tests (basic)'
  task :test do
    ruby './test/functional/suite.rb'
    ruby './test/functional/for_redcloth.rb'
  end

  gem 'rdoc' if defined? gem
  require 'rdoc/task'
  desc 'Generate documentation for CodeRay'
  Rake::RDocTask.new :doc do |rd|
    rd.title = 'CodeRay Documentation'
    rd.main = 'README_INDEX.rdoc'
    rd.rdoc_files.add Dir['lib']
    rd.rdoc_files.add rd.main
    rd.rdoc_dir = 'doc'
  end

end
@@ -1,215 +0,0 @@
#!/usr/bin/env ruby
require 'coderay'

$options, args = ARGV.partition { |arg| arg[/^-[hv]$|--\w+/] }
subcommand = args.first if /^\w/ === args.first
subcommand = nil if subcommand && File.exist?(subcommand)
args.delete subcommand

def option? *options
  !($options & options).empty?
end

def tty?
  $stdout.tty? || option?('--tty')
end

def version
  puts <<-USAGE
CodeRay #{CodeRay::VERSION}
  USAGE
end

def help
  puts <<-HELP
This is CodeRay #{CodeRay::VERSION}, a syntax highlighting tool for selected languages.

usage:
  coderay [-language] [input] [-format] [output]

defaults:
  language   detect from input file name or shebang; fall back to plain text
  input      STDIN
  format     detect from output file name or use terminal; fall back to HTML
  output     STDOUT

common:
  coderay file.rb                   # highlight file to terminal
  coderay file.rb > file.html       # highlight file to HTML page
  coderay file.rb -div > file.html  # highlight file to HTML snippet

configure output:
  coderay file.py output.json       # output tokens as JSON
  coderay file.py -loc              # count lines of code in Python file

configure input:
  coderay -python file              # specify the input language
  coderay -ruby                     # take input from STDIN

more:
  coderay stylesheet [style]        # print CSS stylesheet
  HELP
end

def commands
  puts <<-COMMANDS
general:
  highlight   code highlighting (default command, optional)
  stylesheet  print the CSS stylesheet with the given name (aliases: style, css)

about:
  list [of]   list all available plugins (or just the scanners|encoders|styles|filetypes)
  commands    print this list
  help        show some help
  version     print CodeRay version
  COMMANDS
end

def print_list_of plugin_host
  plugins = plugin_host.all_plugins.map do |plugin|
    info = "  #{plugin.plugin_id}: #{plugin.title}"

    aliases = (plugin.aliases - [:default]).map { |key| "-#{key}" }.sort_by { |key| key.size }
    if plugin.respond_to?(:file_extension) || !aliases.empty?
      additional_info = []
      additional_info << aliases.join(', ') unless aliases.empty?
      info << " (#{additional_info.join('; ')})"
    end

    info << ' <-- default' if plugin.aliases.include? :default

    info
  end
  puts plugins.sort
end

if option? '-v', '--version'
  version
end

if option? '-h', '--help'
  help
end

case subcommand
when 'highlight', nil
  if ARGV.empty?
    version
    help
  else
    signature = args.map { |arg| arg[/^-/] ? '-' : 'f' }.join
    names = args.map { |arg| arg.sub(/^-/, '') }
    case signature
    when /^$/
      exit
    when /^ff?$/
      input_file, output_file, = *names
    when /^f-f?$/
      input_file, output_format, output_file, = *names
    when /^-ff?$/
      input_lang, input_file, output_file, = *names
    when /^-f-f?$/
      input_lang, input_file, output_format, output_file, = *names
    when /^--?f?$/
      input_lang, output_format, output_file, = *names
    else
      $stdout = $stderr
      help
      puts
      puts "Unknown parameter order: #{args.join ' '}, expected: [-language] [input] [-format] [output]"
      exit 1
    end

    if input_file
      input_lang ||= CodeRay::FileType.fetch input_file, :text, true
    end

    if output_file
      output_format ||= CodeRay::FileType[output_file]
    else
      output_format ||= :terminal
    end

    output_format = :page if output_format.to_s == 'html'

    if input_file
      input = File.read input_file
    else
      input = $stdin.read
    end

    begin
      file =
        if output_file
          File.open output_file, 'w'
        else
          $stdout.sync = true
          $stdout
        end
      CodeRay.encode(input, input_lang, output_format, :out => file)
      file.puts
    rescue CodeRay::PluginHost::PluginNotFound => boom
      $stdout = $stderr
      if boom.message[/CodeRay::(\w+)s could not load plugin :?(.*?): /]
        puts "I don't know the #$1 \"#$2\"."
      else
        puts boom.message
      end
      # puts "I don't know this plugin: #{boom.message[/Could not load plugin (.*?): /, 1]}."
    rescue CodeRay::Scanners::Scanner::ScanError  # FIXME: rescue Errno::EPIPE
      # this is sometimes raised by pagers; ignore [TODO: wtf?]
    ensure
      file.close if output_file
    end
  end
when 'li', 'list'
  arg = args.first && args.first.downcase
  if [nil, 's', 'sc', 'scanner', 'scanners'].include? arg
    puts 'input languages (Scanners):'
    print_list_of CodeRay::Scanners
  end

  if [nil, 'e', 'en', 'enc', 'encoder', 'encoders'].include? arg
    puts 'output formats (Encoders):'
    print_list_of CodeRay::Encoders
  end

  if [nil, 'st', 'style', 'styles'].include? arg
    puts 'CSS themes for HTML output (Styles):'
    print_list_of CodeRay::Styles
  end

  if [nil, 'f', 'ft', 'file', 'filetype', 'filetypes'].include? arg
    puts 'recognized file types:'

    filetypes = Hash.new { |h, k| h[k] = [] }
    CodeRay::FileType::TypeFromExt.inject filetypes do |types, (ext, type)|
      types[type.to_s] << ".#{ext}"
      types
    end
    CodeRay::FileType::TypeFromName.inject filetypes do |types, (name, type)|
      types[type.to_s] << name
      types
    end

    filetypes.sort.each do |type, exts|
      puts "  #{type}: #{exts.sort_by { |ext| ext.size }.join(', ')}"
    end
  end
when 'stylesheet', 'style', 'css'
  puts CodeRay::Encoders[:html]::CSS.new(args.first).stylesheet
when 'commands'
  commands
when 'help'
  help
else
  $stdout = $stderr
  help
  puts
  if subcommand[/\A\w+\z/]
    puts "Unknown command: #{subcommand}"
  else
    puts "File not found: #{subcommand}"
  end
  exit 1
end
@ -1,278 +0,0 @@
|
||||||
# encoding: utf-8
|
|
||||||
# Encoding.default_internal = 'UTF-8'
|
|
||||||
|
|
||||||
# = CodeRay Library
|
|
||||||
#
|
|
||||||
# CodeRay is a Ruby library for syntax highlighting.
|
|
||||||
#
|
|
||||||
# I try to make CodeRay easy to use and intuitive, but at the same time fully
|
|
||||||
# featured, complete, fast and efficient.
|
|
||||||
#
|
|
||||||
# See README.
|
|
||||||
#
|
|
||||||
# It consists mainly of
|
|
||||||
# * the main engine: CodeRay (Scanners::Scanner, Tokens, Encoders::Encoder)
|
|
||||||
# * the plugin system: PluginHost, Plugin
|
|
||||||
# * the scanners in CodeRay::Scanners
|
|
||||||
# * the encoders in CodeRay::Encoders
|
|
||||||
# * the styles in CodeRay::Styles
|
|
||||||
#
|
|
||||||
# Here's a fancy graphic to light up this gray docu:
|
|
||||||
#
|
|
||||||
# http://cycnus.de/raindark/coderay/scheme.png
|
|
||||||
#
|
|
||||||
# == Documentation
|
|
||||||
#
|
|
||||||
# See CodeRay, Encoders, Scanners, Tokens.
|
|
||||||
#
|
|
||||||
# == Usage
|
|
||||||
#
|
|
||||||
# Remember you need RubyGems to use CodeRay, unless you have it in your load
|
|
||||||
# path. Run Ruby with -rubygems option if required.
|
|
||||||
#
|
|
||||||
# === Highlight Ruby code in a string as html
|
|
||||||
#
|
|
||||||
# require 'coderay'
|
|
||||||
# print CodeRay.scan('puts "Hello, world!"', :ruby).html
|
|
||||||
#
|
|
||||||
# # prints something like this:
|
|
||||||
# puts <span class="s">"Hello, world!"</span>
|
|
||||||
#
|
|
||||||
#
|
|
||||||
# === Highlight C code from a file in a html div
|
|
||||||
#
|
|
||||||
# require 'coderay'
|
|
||||||
# print CodeRay.scan(File.read('ruby.h'), :c).div
|
|
||||||
# print CodeRay.scan_file('ruby.h').html.div
|
|
||||||
#
|
|
||||||
# You can include this div in your page. The used CSS styles can be printed with
|
|
||||||
#
|
|
||||||
# % coderay_stylesheet
|
|
||||||
#
|
|
||||||
# === Highlight without typing too much
|
|
||||||
#
|
|
||||||
# If you are one of the hasty (or lazy, or extremely curious) people, just run this file:
|
|
||||||
#
|
|
||||||
# % ruby -rubygems /path/to/coderay/coderay.rb > example.html
|
|
||||||
#
|
|
||||||
# and look at the file it created in your browser.
|
|
||||||
#
|
|
||||||
# = CodeRay Module
|
|
||||||
#
|
|
||||||
# The CodeRay module provides convenience methods for the engine.
|
|
||||||
#
|
|
||||||
# * The +lang+ and +format+ arguments select Scanner and Encoder to use. These are
|
|
||||||
# simply lower-case symbols, like <tt>:python</tt> or <tt>:html</tt>.
|
|
||||||
# * All methods take an optional hash as last parameter, +options+, that is send to
|
|
||||||
# the Encoder / Scanner.
|
|
||||||
# * Input and language are always sorted in this order: +code+, +lang+.
|
|
||||||
# (This is in alphabetical order, if you need a mnemonic ;)
|
|
||||||
#
|
|
||||||
# You should be able to highlight everything you want just using these methods;
|
|
||||||
# so there is no need to dive into CodeRay's deep class hierarchy.
|
|
||||||
#
|
|
||||||
# The examples in the demo directory demonstrate common cases using this interface.
|
|
||||||
#
|
|
||||||
# = Basic Access Ways
|
|
||||||
#
|
|
||||||
# Read this to get a general view what CodeRay provides.
|
|
||||||
#
|
|
||||||
# == Scanning
|
|
||||||
#
|
|
||||||
# Scanning means analysing an input string, splitting it up into Tokens.
|
|
||||||
# Each Token knows about what type it is: string, comment, class name, etc.
|
|
||||||
#
|
|
||||||
# Each +lang+ (language) has its own Scanner; for example, <tt>:ruby</tt> code is
|
|
||||||
# handled by CodeRay::Scanners::Ruby.
|
|
||||||
#
|
|
||||||
# CodeRay.scan:: Scan a string in a given language into Tokens.
|
|
||||||
# This is the most common method to use.
|
|
||||||
# CodeRay.scan_file:: Scan a file and guess the language using FileType.
|
|
||||||
#
|
|
||||||
# The Tokens object you get from these methods can encode itself; see Tokens.
|
|
||||||
#
|
|
||||||
# == Encoding
|
|
||||||
#
|
|
||||||
# Encoding means compiling Tokens into an output. This can be colored HTML or
|
|
||||||
# LaTeX, a textual statistic or just the number of non-whitespace tokens.
|
|
||||||
#
|
|
||||||
# Each Encoder provides output in a specific +format+, so you select Encoders via
|
|
||||||
# formats like <tt>:html</tt> or <tt>:statistic</tt>.
|
|
||||||
#
|
|
||||||
# CodeRay.encode:: Scan and encode a string in a given language.
|
|
||||||
# CodeRay.encode_tokens:: Encode the given tokens.
|
|
||||||
# CodeRay.encode_file:: Scan a file, guess the language using FileType and encode it.
|
|
||||||
#
|
|
||||||
# == All-in-One Encoding
|
|
||||||
#
|
|
||||||
# CodeRay.encode:: Highlight a string with a given input and output format.
|
|
||||||
#
|
|
||||||
# == Instanciating
|
|
||||||
#
|
|
||||||
# You can use an Encoder instance to highlight multiple inputs. This way, the setup
|
|
||||||
# for this Encoder must only be done once.
|
|
||||||
#
|
|
||||||
# CodeRay.encoder:: Create an Encoder instance with format and options.
|
|
||||||
# CodeRay.scanner:: Create an Scanner instance for lang, with '' as default code.
|
|
||||||
#
|
|
||||||
# To make use of CodeRay.scanner, use CodeRay::Scanner::code=.
|
|
||||||
#
|
|
||||||
# The scanning methods provide more flexibility; we recommend to use these.
|
|
||||||
#
|
|
||||||
# == Reusing Scanners and Encoders
|
|
||||||
#
|
|
||||||
# If you want to re-use scanners and encoders (because that is faster), see
|
|
||||||
# CodeRay::Duo for the most convenient (and recommended) interface.
|
|
||||||
module CodeRay
|
|
||||||
|
|
||||||
$CODERAY_DEBUG ||= false
|
|
||||||
|
|
||||||
require 'coderay/version'
|
|
||||||
|
|
||||||
# helpers
|
|
||||||
autoload :FileType, 'coderay/helpers/file_type'
|
|
||||||
|
|
||||||
# Tokens
|
|
||||||
autoload :Tokens, 'coderay/tokens'
|
|
||||||
autoload :TokensProxy, 'coderay/tokens_proxy'
|
|
||||||
autoload :TokenKinds, 'coderay/token_kinds'
|
|
||||||
|
|
||||||
# Plugin system
|
|
||||||
autoload :PluginHost, 'coderay/helpers/plugin'
|
|
||||||
autoload :Plugin, 'coderay/helpers/plugin'
|
|
||||||
|
|
||||||
# Plugins
|
|
||||||
autoload :Scanners, 'coderay/scanner'
|
|
||||||
autoload :Encoders, 'coderay/encoder'
|
|
||||||
autoload :Styles, 'coderay/style'
|
|
||||||
|
|
||||||
# Convenience access and reusable Encoder/Scanner pair
|
|
||||||
autoload :Duo, 'coderay/duo'
|
|
||||||
|
|
||||||
class << self
|
|
||||||
|
|
||||||
# Scans the given +code+ (a String) with the Scanner for +lang+.
|
|
||||||
#
|
|
||||||
# This is a simple way to use CodeRay. Example:
|
|
||||||
# require 'coderay'
|
|
||||||
# page = CodeRay.scan("puts 'Hello, world!'", :ruby).html
|
|
||||||
#
|
|
||||||
# See also demo/demo_simple.
|
|
||||||
def scan code, lang, options = {}, &block
|
|
||||||
# FIXME: return a proxy for direct-stream encoding
|
|
||||||
TokensProxy.new code, lang, options, block
|
|
||||||
end
|
|
||||||
|
|
||||||
# Scans +filename+ (a path to a code file) with the Scanner for +lang+.
|
|
||||||
#
|
|
||||||
# If +lang+ is :auto or omitted, the CodeRay::FileType module is used to
|
|
||||||
# determine it. If it cannot find out what type it is, it uses
|
|
||||||
# CodeRay::Scanners::Text.
|
|
||||||
#
|
|
||||||
# Calls CodeRay.scan.
|
|
||||||
#
|
|
||||||
# Example:
|
|
||||||
# require 'coderay'
|
|
||||||
# page = CodeRay.scan_file('some_c_code.c').html
|
|
||||||
def scan_file filename, lang = :auto, options = {}, &block
|
|
||||||
lang = FileType.fetch filename, :text, true if lang == :auto
|
|
||||||
code = File.read filename
|
|
||||||
scan code, lang, options, &block
|
|
||||||
end
|
|
||||||
|
|
||||||
# Encode a string.
|
|
||||||
#
|
|
||||||
# This scans +code+ with the the Scanner for +lang+ and then
|
|
||||||
# encodes it with the Encoder for +format+.
|
|
||||||
# +options+ will be passed to the Encoder.
|
|
||||||
#
|
|
||||||
# See CodeRay::Encoder.encode.
|
|
||||||
def encode code, lang, format, options = {}
|
|
||||||
encoder(format, options).encode code, lang, options
|
|
||||||
end
|
|
||||||
|
|
||||||
# Encode pre-scanned Tokens.
|
|
||||||
# Use this together with CodeRay.scan:
|
|
||||||
#
|
|
||||||
# require 'coderay'
|
|
||||||
#
|
|
||||||
# # Highlight a short Ruby code example in a HTML span
|
|
||||||
# tokens = CodeRay.scan '1 + 2', :ruby
|
|
||||||
# puts CodeRay.encode_tokens(tokens, :span)
|
|
||||||
#
|
|
||||||
def encode_tokens tokens, format, options = {}
|
|
||||||
encoder(format, options).encode_tokens tokens, options
|
|
||||||
end
|
|
||||||
|
|
||||||
# Encodes +filename+ (a path to a code file) with the Scanner for +lang+.
|
|
||||||
#
|
|
||||||
# See CodeRay.scan_file.
|
|
||||||
# Notice that the second argument is the output +format+, not the input language.
|
|
||||||
#
|
|
||||||
# Example:
|
|
||||||
# require 'coderay'
|
|
||||||
# page = CodeRay.encode_file 'some_c_code.c', :html
|
|
||||||
def encode_file filename, format, options = {}
|
|
||||||
tokens = scan_file filename, :auto, get_scanner_options(options)
|
|
||||||
encode_tokens tokens, format, options
|
|
||||||
end
|
|
||||||
|
|
||||||
# Highlight a string into an HTML <div>.
|
|
||||||
#
|
|
||||||
# CSS styles use classes, so you have to include a stylesheet
|
|
||||||
# in your output.
|
|
||||||
#
|
|
||||||
# See encode.
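# A minimal sketch (the exact markup comes from the Div template below):
#
#  CodeRay.highlight "puts 'Hi'", :ruby
#  #-> a <div class="CodeRay"> ... </div> that relies on CSS classes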
|
|
||||||
def highlight code, lang, options = { :css => :class }, format = :div
|
|
||||||
encode code, lang, format, options
|
|
||||||
end
|
|
||||||
|
|
||||||
# Highlight a file into an HTML <div>.
|
|
||||||
#
|
|
||||||
# CSS styles use classes, so you have to include a stylesheet
|
|
||||||
# in your output.
|
|
||||||
#
|
|
||||||
# See encode.
|
|
||||||
def highlight_file filename, options = { :css => :class }, format = :div
|
|
||||||
encode_file filename, format, options
|
|
||||||
end
|
|
||||||
|
|
||||||
# Finds the Encoder class for +format+ and creates an instance, passing
|
|
||||||
# +options+ to it.
|
|
||||||
#
|
|
||||||
# Example:
|
|
||||||
# require 'coderay'
|
|
||||||
#
|
|
||||||
# stats = CodeRay.encoder(:statistic)
|
|
||||||
# stats.encode("puts 17 + 4\n", :ruby)
|
|
||||||
#
|
|
||||||
# puts '%d out of %d tokens have the kind :integer.' % [
|
|
||||||
# stats.type_stats[:integer].count,
|
|
||||||
# stats.real_token_count
|
|
||||||
# ]
|
|
||||||
# #-> 2 out of 4 tokens have the kind :integer.
|
|
||||||
def encoder format, options = {}
|
|
||||||
Encoders[format].new options
|
|
||||||
end
|
|
||||||
|
|
||||||
# Finds the Scanner class for +lang+ and creates an instance, passing
|
|
||||||
# +options+ to it.
|
|
||||||
#
|
|
||||||
# See Scanner.new.
|
|
||||||
def scanner lang, options = {}, &block
|
|
||||||
Scanners[lang].new '', options, &block
|
|
||||||
end
|
|
||||||
|
|
||||||
# Extract the options for the scanner from the +options+ hash.
|
|
||||||
#
|
|
||||||
# Returns an empty Hash if <tt>:scanner_options</tt> is not set.
|
|
||||||
#
|
|
||||||
# This is used if a method like CodeRay.encode has to provide options
|
|
||||||
# for Encoder _and_ scanner.
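# Illustrative sketch (the inner option key is hypothetical):
#
#  CodeRay.get_scanner_options :scanner_options => { :some_option => 1 }, :wrap => :div
#  #-> { :some_option => 1 }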
|
|
||||||
def get_scanner_options options
|
|
||||||
options.fetch :scanner_options, {}
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
|
@ -1,81 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
|
|
||||||
# = Duo
|
|
||||||
#
|
|
||||||
# A Duo is a convenient way to use CodeRay. You just create a Duo,
|
|
||||||
# giving it a lang (language of the input code) and a format (desired
|
|
||||||
# output format), and call Duo#highlight with the code.
|
|
||||||
#
|
|
||||||
# Duo makes it easy to re-use both scanner and encoder for a repetitive
|
|
||||||
# task. It also provides a very easy interface syntax:
|
|
||||||
#
|
|
||||||
# require 'coderay'
|
|
||||||
# CodeRay::Duo[:python, :div].highlight 'import this'
|
|
||||||
#
|
|
||||||
# Unless you want to do uncommon things with CodeRay, I recommend using
|
|
||||||
# this method, since it takes care of everything.
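# A minimal reuse sketch (+snippets+ is a hypothetical Array of code Strings):
#
#  duo = CodeRay::Duo[:ruby, :div]
#  snippets.map { |code| duo.highlight code }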
|
|
||||||
class Duo
|
|
||||||
|
|
||||||
attr_accessor :lang, :format, :options
|
|
||||||
|
|
||||||
# Create a new Duo, holding a lang and a format to highlight code.
|
|
||||||
#
|
|
||||||
# simple:
|
|
||||||
# CodeRay::Duo[:ruby, :html].highlight 'bla 42'
|
|
||||||
#
|
|
||||||
# with options:
|
|
||||||
# CodeRay::Duo[:ruby, :html, :hint => :debug].highlight '????::??'
|
|
||||||
#
|
|
||||||
# alternative syntax without options:
|
|
||||||
# CodeRay::Duo[:ruby => :statistic].encode 'class << self; end'
|
|
||||||
#
|
|
||||||
# alternative syntax with options:
|
|
||||||
# CodeRay::Duo[{ :ruby => :statistic }, :do => :something].encode 'abc'
|
|
||||||
#
|
|
||||||
# The options are forwarded to scanner and encoder
|
|
||||||
# (see CodeRay.get_scanner_options).
|
|
||||||
def initialize lang = nil, format = nil, options = {}
|
|
||||||
if format.nil? && lang.is_a?(Hash) && lang.size == 1
|
|
||||||
@lang = lang.keys.first
|
|
||||||
@format = lang[@lang]
|
|
||||||
else
|
|
||||||
@lang = lang
|
|
||||||
@format = format
|
|
||||||
end
|
|
||||||
@options = options
|
|
||||||
end
|
|
||||||
|
|
||||||
class << self
|
|
||||||
# To allow calls like Duo[:ruby, :html].highlight.
|
|
||||||
alias [] new
|
|
||||||
end
|
|
||||||
|
|
||||||
# The scanner of the duo. Only created once.
|
|
||||||
def scanner
|
|
||||||
@scanner ||= CodeRay.scanner @lang, CodeRay.get_scanner_options(@options)
|
|
||||||
end
|
|
||||||
|
|
||||||
# The encoder of the duo. Only created once.
|
|
||||||
def encoder
|
|
||||||
@encoder ||= CodeRay.encoder @format, @options
|
|
||||||
end
|
|
||||||
|
|
||||||
# Tokenize and highlight the code using +scanner+ and +encoder+.
|
|
||||||
def encode code, options = {}
|
|
||||||
options = @options.merge options
|
|
||||||
encoder.encode(code, @lang, options)
|
|
||||||
end
|
|
||||||
alias highlight encode
|
|
||||||
|
|
||||||
# Allows using a Duo like a proc object:
|
|
||||||
#
|
|
||||||
# CodeRay::Duo[:python => :yaml].call(code)
|
|
||||||
#
|
|
||||||
# or, in Ruby 1.9 and later:
|
|
||||||
#
|
|
||||||
# CodeRay::Duo[:python => :yaml].(code)
|
|
||||||
alias call encode
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
|
@ -1,201 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
|
|
||||||
# This module holds the Encoder class and its subclasses.
|
|
||||||
# For example, the HTML encoder is named CodeRay::Encoders::HTML and
|
|
||||||
# can be found in coderay/encoders/html.
|
|
||||||
#
|
|
||||||
# The Encoders module also provides methods and constants for the register
|
|
||||||
# mechanism and the [] method that returns the Encoder class
|
|
||||||
# belonging to the given format.
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
extend PluginHost
|
|
||||||
plugin_path File.dirname(__FILE__), 'encoders'
|
|
||||||
|
|
||||||
# = Encoder
|
|
||||||
#
|
|
||||||
# The Encoder base class. Together with Scanner and
|
|
||||||
# Tokens, it forms the highlighting triad.
|
|
||||||
#
|
|
||||||
# Encoder instances take a Tokens object and do something with it.
|
|
||||||
#
|
|
||||||
# The most common Encoder is surely the HTML encoder
|
|
||||||
# (CodeRay::Encoders::HTML). It highlights the code in a colorful
|
|
||||||
# html page.
|
|
||||||
# If you want the highlighted code in a div or a span instead,
|
|
||||||
# use its subclasses Div and Span.
|
|
||||||
class Encoder
|
|
||||||
extend Plugin
|
|
||||||
plugin_host Encoders
|
|
||||||
|
|
||||||
class << self
|
|
||||||
|
|
||||||
# If FILE_EXTENSION isn't defined, this method returns the
|
|
||||||
# downcased class name instead.
|
|
||||||
def const_missing sym
|
|
||||||
if sym == :FILE_EXTENSION
|
|
||||||
(defined?(@plugin_id) && @plugin_id || name[/\w+$/].downcase).to_s
|
|
||||||
else
|
|
||||||
super
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# The default file extension for output file of this encoder class.
|
|
||||||
def file_extension
|
|
||||||
self::FILE_EXTENSION
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
# Subclasses should store their default options in this constant.
|
|
||||||
DEFAULT_OPTIONS = { }
|
|
||||||
|
|
||||||
# The options you gave the Encoder at creation.
|
|
||||||
attr_accessor :options, :scanner
|
|
||||||
|
|
||||||
# Creates a new Encoder.
|
|
||||||
# +options+ is saved and used for all encode operations, as long
|
|
||||||
# as you don't overwrite it there by passing additional options.
|
|
||||||
#
|
|
||||||
# Encoder objects provide two encode methods:
|
|
||||||
# - encode simply takes a +code+ string and a +lang+
|
|
||||||
# - encode_tokens expects a +tokens+ object instead
|
|
||||||
#
|
|
||||||
# Each method has an optional +options+ parameter. These are
|
|
||||||
# added to the options you passed at creation.
|
|
||||||
def initialize options = {}
|
|
||||||
@options = self.class::DEFAULT_OPTIONS.merge options
|
|
||||||
@@CODERAY_TOKEN_INTERFACE_DEPRECATION_WARNING_GIVEN = false
|
|
||||||
end
|
|
||||||
|
|
||||||
# Encode a Tokens object.
|
|
||||||
def encode_tokens tokens, options = {}
|
|
||||||
options = @options.merge options
|
|
||||||
@scanner = tokens.scanner if tokens.respond_to? :scanner
|
|
||||||
setup options
|
|
||||||
compile tokens, options
|
|
||||||
finish options
|
|
||||||
end
|
|
||||||
|
|
||||||
# Encode the given +code+ using the Scanner for +lang+.
|
|
||||||
def encode code, lang, options = {}
|
|
||||||
options = @options.merge options
|
|
||||||
@scanner = Scanners[lang].new code, CodeRay.get_scanner_options(options).update(:tokens => self)
|
|
||||||
setup options
|
|
||||||
@scanner.tokenize
|
|
||||||
finish options
|
|
||||||
end
|
|
||||||
|
|
||||||
# You can use highlight instead of encode, if that seems
|
|
||||||
# clearer to you.
|
|
||||||
alias highlight encode
|
|
||||||
|
|
||||||
# The default file extension for this encoder.
|
|
||||||
def file_extension
|
|
||||||
self.class.file_extension
|
|
||||||
end
|
|
||||||
|
|
||||||
def << token
|
|
||||||
unless @@CODERAY_TOKEN_INTERFACE_DEPRECATION_WARNING_GIVEN
|
|
||||||
warn 'Using old Tokens#<< interface.'
|
|
||||||
@@CODERAY_TOKEN_INTERFACE_DEPRECATION_WARNING_GIVEN = true
|
|
||||||
end
|
|
||||||
self.token(*token)
|
|
||||||
end
|
|
||||||
|
|
||||||
# Called with +content+ and +kind+ of the currently scanned token.
|
|
||||||
# For simple encoders, it's enough to implement this method.
|
|
||||||
#
|
|
||||||
# By default, it calls text_token, begin_group, end_group, begin_line,
|
|
||||||
# or end_line, depending on the +content+.
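# For example, a Ruby string literal typically arrives as a call sequence
# like this (a sketch; compare the JSON encoder example below):
#
#  token :begin_group, :string
#  token "'",  :delimiter
#  token 'Hi', :content
#  token "'",  :delimiter
#  token :end_group, :string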
|
|
||||||
def token content, kind
|
|
||||||
case content
|
|
||||||
when String
|
|
||||||
text_token content, kind
|
|
||||||
when :begin_group
|
|
||||||
begin_group kind
|
|
||||||
when :end_group
|
|
||||||
end_group kind
|
|
||||||
when :begin_line
|
|
||||||
begin_line kind
|
|
||||||
when :end_line
|
|
||||||
end_line kind
|
|
||||||
else
|
|
||||||
raise ArgumentError, 'Unknown token content type: %p, kind = %p' % [content, kind]
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# Called for each text token ([text, kind]), where text is a String.
|
|
||||||
def text_token text, kind
|
|
||||||
@out << text
|
|
||||||
end
|
|
||||||
|
|
||||||
# Starts a token group with the given +kind+.
|
|
||||||
def begin_group kind
|
|
||||||
end
|
|
||||||
|
|
||||||
# Ends a token group with the given +kind+.
|
|
||||||
def end_group kind
|
|
||||||
end
|
|
||||||
|
|
||||||
# Starts a new line token group with the given +kind+.
|
|
||||||
def begin_line kind
|
|
||||||
end
|
|
||||||
|
|
||||||
# Ends a new line token group with the given +kind+.
|
|
||||||
def end_line kind
|
|
||||||
end
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
# Called with merged options before encoding starts.
|
|
||||||
# Sets @out to an empty string.
|
|
||||||
#
|
|
||||||
# See the HTML Encoder for an example of option caching.
|
|
||||||
def setup options
|
|
||||||
@out = get_output(options)
|
|
||||||
end
|
|
||||||
|
|
||||||
def get_output options
|
|
||||||
options[:out] || ''
|
|
||||||
end
|
|
||||||
|
|
||||||
# Append data.to_s to the output. Returns the argument.
|
|
||||||
def output data
|
|
||||||
@out << data.to_s
|
|
||||||
data
|
|
||||||
end
|
|
||||||
|
|
||||||
# Called with merged options after encoding is done.
|
|
||||||
# The return value is the result of encoding, typically @out.
|
|
||||||
def finish options
|
|
||||||
@out
|
|
||||||
end
|
|
||||||
|
|
||||||
# Do the encoding.
|
|
||||||
#
|
|
||||||
# The already created +tokens+ object must be used; it must be a
|
|
||||||
# Tokens object.
|
|
||||||
def compile tokens, options = {}
|
|
||||||
content = nil
|
|
||||||
for item in tokens
|
|
||||||
if item.is_a? Array
|
|
||||||
raise ArgumentError, 'Two-element array tokens are no longer supported.'
|
|
||||||
end
|
|
||||||
if content
|
|
||||||
token content, item
|
|
||||||
content = nil
|
|
||||||
else
|
|
||||||
content = item
|
|
||||||
end
|
|
||||||
end
|
|
||||||
raise 'odd number list for Tokens' if content
|
|
||||||
end
|
|
||||||
|
|
||||||
alias tokens compile
|
|
||||||
public :tokens
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,17 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
map \
|
|
||||||
:loc => :lines_of_code,
|
|
||||||
:plain => :text,
|
|
||||||
:plaintext => :text,
|
|
||||||
:remove_comments => :comment_filter,
|
|
||||||
:stats => :statistic,
|
|
||||||
:term => :terminal,
|
|
||||||
:tty => :terminal,
|
|
||||||
:yml => :yaml
|
|
||||||
|
|
||||||
# No default because Tokens#nonsense should raise NoMethodError.
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,25 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
load :token_kind_filter
|
|
||||||
|
|
||||||
# A simple Filter that removes all tokens of the :comment kind.
|
|
||||||
#
|
|
||||||
# Alias: +remove_comments+
|
|
||||||
#
|
|
||||||
# Usage:
|
|
||||||
# CodeRay.scan('print # foo', :ruby).comment_filter.text
|
|
||||||
# #-> "print "
|
|
||||||
#
|
|
||||||
# See also: TokenKindFilter, LinesOfCode
|
|
||||||
class CommentFilter < TokenKindFilter
|
|
||||||
|
|
||||||
register_for :comment_filter
|
|
||||||
|
|
||||||
DEFAULT_OPTIONS = superclass::DEFAULT_OPTIONS.merge \
|
|
||||||
:exclude => [:comment, :docstring]
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,39 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
# Returns the number of tokens.
|
|
||||||
#
|
|
||||||
# Text and block tokens are counted.
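# Example (a sketch; the exact number depends on how the Ruby scanner
# splits the code):
#
#  CodeRay.encode '1 + 1', :ruby, :count  #-> 5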
|
|
||||||
class Count < Encoder
|
|
||||||
|
|
||||||
register_for :count
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def setup options
|
|
||||||
super
|
|
||||||
|
|
||||||
@count = 0
|
|
||||||
end
|
|
||||||
|
|
||||||
def finish options
|
|
||||||
output @count
|
|
||||||
end
|
|
||||||
|
|
||||||
public
|
|
||||||
|
|
||||||
def text_token text, kind
|
|
||||||
@count += 1
|
|
||||||
end
|
|
||||||
|
|
||||||
def begin_group kind
|
|
||||||
@count += 1
|
|
||||||
end
|
|
||||||
alias end_group begin_group
|
|
||||||
alias begin_line begin_group
|
|
||||||
alias end_line begin_group
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,61 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
# = Debug Encoder
|
|
||||||
#
|
|
||||||
# Fast encoder producing simple debug output.
|
|
||||||
#
|
|
||||||
# It is readable and diff-able and is used for testing.
|
|
||||||
#
|
|
||||||
# You cannot fully restore the tokens information from the
|
|
||||||
# output, because consecutive :space tokens are merged.
|
|
||||||
# Use Tokens#dump for caching purposes.
|
|
||||||
#
|
|
||||||
# See also: Scanners::Debug
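# Sample output (a sketch, assuming the usual Ruby scanner tokens):
#
#  CodeRay.encode "puts 'Hi'", :ruby, :debug
#  #-> ident(puts) string<delimiter(')content(Hi)delimiter(')>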
|
|
||||||
class Debug < Encoder
|
|
||||||
|
|
||||||
register_for :debug
|
|
||||||
|
|
||||||
FILE_EXTENSION = 'raydebug'
|
|
||||||
|
|
||||||
def initialize options = {}
|
|
||||||
super
|
|
||||||
@opened = []
|
|
||||||
end
|
|
||||||
|
|
||||||
def text_token text, kind
|
|
||||||
if kind == :space
|
|
||||||
@out << text
|
|
||||||
else
|
|
||||||
# TODO: Escape (
|
|
||||||
text = text.gsub(/[)\\]/, '\\\\\0') # escape ) and \
|
|
||||||
@out << kind.to_s << '(' << text << ')'
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
def begin_group kind
|
|
||||||
@opened << kind
|
|
||||||
@out << kind.to_s << '<'
|
|
||||||
end
|
|
||||||
|
|
||||||
def end_group kind
|
|
||||||
if @opened.last != kind
|
|
||||||
puts @out
|
|
||||||
raise "we are inside #{@opened.inspect}, not #{kind}"
|
|
||||||
end
|
|
||||||
@opened.pop
|
|
||||||
@out << '>'
|
|
||||||
end
|
|
||||||
|
|
||||||
def begin_line kind
|
|
||||||
@out << kind.to_s << '['
|
|
||||||
end
|
|
||||||
|
|
||||||
def end_line kind
|
|
||||||
@out << ']'
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,23 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
load :html
|
|
||||||
|
|
||||||
# Wraps HTML output into a DIV element, using inline styles by default.
|
|
||||||
#
|
|
||||||
# See Encoders::HTML for available options.
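# Example (sketch):
#
#  CodeRay.scan('puts 1', :ruby).div  #-> "<div class=\"CodeRay\"> ... </div>"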
|
|
||||||
class Div < HTML
|
|
||||||
|
|
||||||
FILE_EXTENSION = 'div.html'
|
|
||||||
|
|
||||||
register_for :div
|
|
||||||
|
|
||||||
DEFAULT_OPTIONS = HTML::DEFAULT_OPTIONS.merge \
|
|
||||||
:css => :style,
|
|
||||||
:wrap => :div,
|
|
||||||
:line_numbers => false
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,58 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
# A Filter encoder has another Tokens instance as output.
|
|
||||||
# It can be subclassed to select, remove, or modify tokens in the stream.
|
|
||||||
#
|
|
||||||
# Subclasses of Filter are called "Filters" and can be chained.
|
|
||||||
#
|
|
||||||
# == Options
|
|
||||||
#
|
|
||||||
# === :tokens
|
|
||||||
#
|
|
||||||
# The Tokens object which will receive the output.
|
|
||||||
#
|
|
||||||
# Default: Tokens.new
|
|
||||||
#
|
|
||||||
# See also: TokenKindFilter
|
|
||||||
class Filter < Encoder
|
|
||||||
|
|
||||||
register_for :filter
|
|
||||||
|
|
||||||
protected
|
|
||||||
def setup options
|
|
||||||
super
|
|
||||||
|
|
||||||
@tokens = options[:tokens] || Tokens.new
|
|
||||||
end
|
|
||||||
|
|
||||||
def finish options
|
|
||||||
output @tokens
|
|
||||||
end
|
|
||||||
|
|
||||||
public
|
|
||||||
|
|
||||||
def text_token text, kind # :nodoc:
|
|
||||||
@tokens.text_token text, kind
|
|
||||||
end
|
|
||||||
|
|
||||||
def begin_group kind # :nodoc:
|
|
||||||
@tokens.begin_group kind
|
|
||||||
end
|
|
||||||
|
|
||||||
def begin_line kind # :nodoc:
|
|
||||||
@tokens.begin_line kind
|
|
||||||
end
|
|
||||||
|
|
||||||
def end_group kind # :nodoc:
|
|
||||||
@tokens.end_group kind
|
|
||||||
end
|
|
||||||
|
|
||||||
def end_line kind # :nodoc:
|
|
||||||
@tokens.end_line kind
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,302 +0,0 @@
|
||||||
require 'set'
|
|
||||||
|
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
# = HTML Encoder
|
|
||||||
#
|
|
||||||
# This is CodeRay's most important highlighter:
|
|
||||||
# It provides safe, fast XHTML generation and CSS support.
|
|
||||||
#
|
|
||||||
# == Usage
|
|
||||||
#
|
|
||||||
# require 'coderay'
|
|
||||||
# puts CodeRay.scan('Some /code/', :ruby).html #-> a HTML page
|
|
||||||
# puts CodeRay.scan('Some /code/', :ruby).html(:wrap => :span)
|
|
||||||
# #-> <span class="CodeRay"><span class="co">Some</span> /code/</span>
|
|
||||||
# puts CodeRay.scan('Some /code/', :ruby).span #-> the same
|
|
||||||
#
|
|
||||||
# puts CodeRay.scan('Some code', :ruby).html(
|
|
||||||
# :wrap => nil,
|
|
||||||
# :line_numbers => :inline,
|
|
||||||
# :css => :style
|
|
||||||
# )
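# Another sketch combining the options documented below into a full page
# ('foo.rb' is a placeholder path):
#
#  puts CodeRay.scan(File.read('foo.rb'), :ruby).html(
#    :wrap => :page,
#    :line_numbers => :table,
#    :line_number_start => 10,
#    :highlight_lines => [12, 13]
#  )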
|
|
||||||
#
|
|
||||||
# == Options
|
|
||||||
#
|
|
||||||
# === :tab_width
|
|
||||||
# Convert \t characters to +n+ spaces (a number).
|
|
||||||
#
|
|
||||||
# Default: 8
|
|
||||||
#
|
|
||||||
# === :css
|
|
||||||
# How to include the styles; can be :class or :style.
|
|
||||||
#
|
|
||||||
# Default: :class
|
|
||||||
#
|
|
||||||
# === :wrap
|
|
||||||
# Wrap in :page, :div, :span or nil.
|
|
||||||
#
|
|
||||||
# You can also use Encoders::Div and Encoders::Span.
|
|
||||||
#
|
|
||||||
# Default: nil
|
|
||||||
#
|
|
||||||
# === :title
|
|
||||||
#
|
|
||||||
# The title of the HTML page (works only when :wrap is set to :page).
|
|
||||||
#
|
|
||||||
# Default: 'CodeRay output'
|
|
||||||
#
|
|
||||||
# === :line_numbers
|
|
||||||
# Include line numbers in :table, :inline, or nil (no line numbers)
|
|
||||||
#
|
|
||||||
# Default: nil
|
|
||||||
#
|
|
||||||
# === :line_number_anchors
|
|
||||||
# Adds anchors and links to the line numbers. Can be false (off), true (on),
|
|
||||||
# or a prefix string that will be prepended to the anchor name.
|
|
||||||
#
|
|
||||||
# The prefix must consist only of letters, digits, and underscores.
|
|
||||||
#
|
|
||||||
# Default: true, default prefix name: "line"
|
|
||||||
#
|
|
||||||
# === :line_number_start
|
|
||||||
# Where to start with line number counting.
|
|
||||||
#
|
|
||||||
# Default: 1
|
|
||||||
#
|
|
||||||
# === :bold_every
|
|
||||||
# Make every +n+-th number appear bold.
|
|
||||||
#
|
|
||||||
# Default: 10
|
|
||||||
#
|
|
||||||
# === :highlight_lines
|
|
||||||
#
|
|
||||||
# Highlights certain line numbers.
|
|
||||||
# Can be any Enumerable, typically just an Array or Range, of numbers.
|
|
||||||
#
|
|
||||||
# Bolding is deactivated when :highlight_lines is set. It only makes sense
|
|
||||||
# in combination with :line_numbers.
|
|
||||||
#
|
|
||||||
# Default: nil
|
|
||||||
#
|
|
||||||
# === :hint
|
|
||||||
# Include some information into the output using the title attribute.
|
|
||||||
# Can be :info (show token kind on mouse-over), :info_long (with full path)
|
|
||||||
# or :debug (via inspect).
|
|
||||||
#
|
|
||||||
# Default: false
|
|
||||||
class HTML < Encoder
|
|
||||||
|
|
||||||
register_for :html
|
|
||||||
|
|
||||||
FILE_EXTENSION = 'snippet.html'
|
|
||||||
|
|
||||||
DEFAULT_OPTIONS = {
|
|
||||||
:tab_width => 8,
|
|
||||||
|
|
||||||
:css => :class,
|
|
||||||
:style => :alpha,
|
|
||||||
:wrap => nil,
|
|
||||||
:title => 'CodeRay output',
|
|
||||||
|
|
||||||
:line_numbers => nil,
|
|
||||||
:line_number_anchors => 'n',
|
|
||||||
:line_number_start => 1,
|
|
||||||
:bold_every => 10,
|
|
||||||
:highlight_lines => nil,
|
|
||||||
|
|
||||||
:hint => false,
|
|
||||||
}
|
|
||||||
|
|
||||||
autoload :Output, 'coderay/encoders/html/output'
|
|
||||||
autoload :CSS, 'coderay/encoders/html/css'
|
|
||||||
autoload :Numbering, 'coderay/encoders/html/numbering'
|
|
||||||
|
|
||||||
attr_reader :css
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
HTML_ESCAPE = { #:nodoc:
|
|
||||||
'&' => '&',
|
|
||||||
'"' => '"',
|
|
||||||
'>' => '>',
|
|
||||||
'<' => '<',
|
|
||||||
}
|
|
||||||
|
|
||||||
# This was to prevent illegal HTML.
|
|
||||||
# Strange chars should still be avoided in code.
|
|
||||||
evil_chars = Array(0x00...0x20) - [?\n, ?\t, ?\s]
|
|
||||||
evil_chars.each { |i| HTML_ESCAPE[i.chr] = ' ' }
|
|
||||||
#ansi_chars = Array(0x7f..0xff)
|
|
||||||
#ansi_chars.each { |i| HTML_ESCAPE[i.chr] = '&#%d;' % i }
|
|
||||||
# \x9 (\t) and \xA (\n) not included
|
|
||||||
#HTML_ESCAPE_PATTERN = /[\t&"><\0-\x8\xB-\x1f\x7f-\xff]/
|
|
||||||
HTML_ESCAPE_PATTERN = /[\t"&><\0-\x8\xB-\x1f]/
|
|
||||||
|
|
||||||
TOKEN_KIND_TO_INFO = Hash.new do |h, kind|
|
|
||||||
h[kind] = kind.to_s.gsub(/_/, ' ').gsub(/\b\w/) { $&.capitalize }
|
|
||||||
end
|
|
||||||
|
|
||||||
TRANSPARENT_TOKEN_KINDS = Set[
|
|
||||||
:delimiter, :modifier, :content, :escape, :inline_delimiter,
|
|
||||||
]
|
|
||||||
|
|
||||||
# Generate a hint about the given +kinds+ in a +hint+ style.
|
|
||||||
#
|
|
||||||
# +hint+ may be :info, :info_long or :debug.
|
|
||||||
def self.token_path_to_hint hint, kinds
|
|
||||||
kinds = Array kinds
|
|
||||||
title =
|
|
||||||
case hint
|
|
||||||
when :info
|
|
||||||
kinds = kinds[1..-1] if TRANSPARENT_TOKEN_KINDS.include? kinds.first
|
|
||||||
TOKEN_KIND_TO_INFO[kinds.first]
|
|
||||||
when :info_long
|
|
||||||
kinds.reverse.map { |kind| TOKEN_KIND_TO_INFO[kind] }.join('/')
|
|
||||||
when :debug
|
|
||||||
kinds.inspect
|
|
||||||
end
|
|
||||||
title ? " title=\"#{title}\"" : ''
|
|
||||||
end
|
|
||||||
|
|
||||||
def setup options
|
|
||||||
super
|
|
||||||
|
|
||||||
if options[:wrap] || options[:line_numbers]
|
|
||||||
@real_out = @out
|
|
||||||
@out = ''
|
|
||||||
end
|
|
||||||
|
|
||||||
@HTML_ESCAPE = HTML_ESCAPE.dup
|
|
||||||
@HTML_ESCAPE["\t"] = ' ' * options[:tab_width]
|
|
||||||
|
|
||||||
@opened = []
|
|
||||||
@last_opened = nil
|
|
||||||
@css = CSS.new options[:style]
|
|
||||||
|
|
||||||
hint = options[:hint]
|
|
||||||
if hint && ![:debug, :info, :info_long].include?(hint)
|
|
||||||
raise ArgumentError, "Unknown value %p for :hint; \
|
|
||||||
expected :info, :info_long, :debug, false, or nil." % hint
|
|
||||||
end
|
|
||||||
|
|
||||||
css_classes = TokenKinds
|
|
||||||
case options[:css]
|
|
||||||
when :class
|
|
||||||
@span_for_kind = Hash.new do |h, k|
|
|
||||||
if k.is_a? ::Symbol
|
|
||||||
kind = k_dup = k
|
|
||||||
else
|
|
||||||
kind = k.first
|
|
||||||
k_dup = k.dup
|
|
||||||
end
|
|
||||||
if kind != :space && (hint || css_class = css_classes[kind])
|
|
||||||
title = HTML.token_path_to_hint hint, k if hint
|
|
||||||
css_class ||= css_classes[kind]
|
|
||||||
h[k_dup] = "<span#{title}#{" class=\"#{css_class}\"" if css_class}>"
|
|
||||||
else
|
|
||||||
h[k_dup] = nil
|
|
||||||
end
|
|
||||||
end
|
|
||||||
when :style
|
|
||||||
@span_for_kind = Hash.new do |h, k|
|
|
||||||
kind = k.is_a?(Symbol) ? k : k.first
|
|
||||||
h[k.is_a?(Symbol) ? k : k.dup] =
|
|
||||||
if kind != :space && (hint || css_classes[kind])
|
|
||||||
title = HTML.token_path_to_hint hint, k if hint
|
|
||||||
style = @css.get_style Array(k).map { |c| css_classes[c] }
|
|
||||||
"<span#{title}#{" style=\"#{style}\"" if style}>"
|
|
||||||
end
|
|
||||||
end
|
|
||||||
else
|
|
||||||
raise ArgumentError, "Unknown value %p for :css." % options[:css]
|
|
||||||
end
|
|
||||||
|
|
||||||
@set_last_opened = options[:hint] || options[:css] == :style
|
|
||||||
end
|
|
||||||
|
|
||||||
def finish options
|
|
||||||
unless @opened.empty?
|
|
||||||
warn '%d tokens still open: %p' % [@opened.size, @opened] if $CODERAY_DEBUG
|
|
||||||
@out << '</span>' while @opened.pop
|
|
||||||
@last_opened = nil
|
|
||||||
end
|
|
||||||
|
|
||||||
@out.extend Output
|
|
||||||
@out.css = @css
|
|
||||||
if options[:line_numbers]
|
|
||||||
Numbering.number! @out, options[:line_numbers], options
|
|
||||||
end
|
|
||||||
@out.wrap! options[:wrap]
|
|
||||||
@out.apply_title! options[:title]
|
|
||||||
|
|
||||||
if defined?(@real_out) && @real_out
|
|
||||||
@real_out << @out
|
|
||||||
@out = @real_out
|
|
||||||
end
|
|
||||||
|
|
||||||
super
|
|
||||||
end
|
|
||||||
|
|
||||||
public
|
|
||||||
|
|
||||||
def text_token text, kind
|
|
||||||
if text =~ /#{HTML_ESCAPE_PATTERN}/o
|
|
||||||
text = text.gsub(/#{HTML_ESCAPE_PATTERN}/o) { |m| @HTML_ESCAPE[m] }
|
|
||||||
end
|
|
||||||
if style = @span_for_kind[@last_opened ? [kind, *@opened] : kind]
|
|
||||||
@out << style << text << '</span>'
|
|
||||||
else
|
|
||||||
@out << text
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# token groups, e.g. strings
|
|
||||||
def begin_group kind
|
|
||||||
@out << (@span_for_kind[@last_opened ? [kind, *@opened] : kind] || '<span>')
|
|
||||||
@opened << kind
|
|
||||||
@last_opened = kind if @set_last_opened
|
|
||||||
end
|
|
||||||
|
|
||||||
def end_group kind
|
|
||||||
if $CODERAY_DEBUG && (@opened.empty? || @opened.last != kind)
|
|
||||||
warn 'Malformed token stream: Trying to close a token (%p) ' \
|
|
||||||
'that is not open. Open are: %p.' % [kind, @opened[1..-1]]
|
|
||||||
end
|
|
||||||
if @opened.pop
|
|
||||||
@out << '</span>'
|
|
||||||
@last_opened = @opened.last if @last_opened
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# whole lines to be highlighted, e.g. a deleted line in a diff
|
|
||||||
def begin_line kind
|
|
||||||
if style = @span_for_kind[@last_opened ? [kind, *@opened] : kind]
|
|
||||||
if style['class="']
|
|
||||||
@out << style.sub('class="', 'class="line ')
|
|
||||||
else
|
|
||||||
@out << style.sub('>', ' class="line">')
|
|
||||||
end
|
|
||||||
else
|
|
||||||
@out << '<span class="line">'
|
|
||||||
end
|
|
||||||
@opened << kind
|
|
||||||
@last_opened = kind if @options[:css] == :style
|
|
||||||
end
|
|
||||||
|
|
||||||
def end_line kind
|
|
||||||
if $CODERAY_DEBUG && (@opened.empty? || @opened.last != kind)
|
|
||||||
warn 'Malformed token stream: Trying to close a line (%p) ' \
|
|
||||||
'that is not open. Open are: %p.' % [kind, @opened[1..-1]]
|
|
||||||
end
|
|
||||||
if @opened.pop
|
|
||||||
@out << '</span>'
|
|
||||||
@last_opened = @opened.last if @last_opened
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,65 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
class HTML
|
|
||||||
class CSS # :nodoc:
|
|
||||||
|
|
||||||
attr :stylesheet
|
|
||||||
|
|
||||||
def CSS.load_stylesheet style = nil
|
|
||||||
CodeRay::Styles[style]
|
|
||||||
end
|
|
||||||
|
|
||||||
def initialize style = :default
|
|
||||||
@classes = Hash.new
|
|
||||||
style = CSS.load_stylesheet style
|
|
||||||
@stylesheet = [
|
|
||||||
style::CSS_MAIN_STYLES,
|
|
||||||
style::TOKEN_COLORS.gsub(/^(?!$)/, '.CodeRay ')
|
|
||||||
].join("\n")
|
|
||||||
parse style::TOKEN_COLORS
|
|
||||||
end
|
|
||||||
|
|
||||||
def get_style styles
|
|
||||||
cl = @classes[styles.first]
|
|
||||||
return '' unless cl
|
|
||||||
style = ''
|
|
||||||
1.upto styles.size do |offset|
|
|
||||||
break if style = cl[styles[offset .. -1]]
|
|
||||||
end
|
|
||||||
# warn 'Style not found: %p' % [styles] if style.empty?
|
|
||||||
return style
|
|
||||||
end
|
|
||||||
|
|
||||||
private
|
|
||||||
|
|
||||||
CSS_CLASS_PATTERN = /
|
|
||||||
( # $1 = selectors
|
|
||||||
(?:
|
|
||||||
(?: \s* \. [-\w]+ )+
|
|
||||||
\s* ,?
|
|
||||||
)+
|
|
||||||
)
|
|
||||||
\s* \{ \s*
|
|
||||||
( [^\}]+ )? # $2 = style
|
|
||||||
\s* \} \s*
|
|
||||||
|
|
|
||||||
( [^\n]+ ) # $3 = error
|
|
||||||
/mx
|
|
||||||
def parse stylesheet
|
|
||||||
stylesheet.scan CSS_CLASS_PATTERN do |selectors, style, error|
|
|
||||||
raise "CSS parse error: '#{error.inspect}' not recognized" if error
|
|
||||||
for selector in selectors.split(',')
|
|
||||||
classes = selector.scan(/[-\w]+/)
|
|
||||||
cl = classes.pop
|
|
||||||
@classes[cl] ||= Hash.new
|
|
||||||
@classes[cl][classes] = style.to_s.strip.delete(' ').chomp(';')
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,115 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
class HTML
|
|
||||||
|
|
||||||
module Numbering # :nodoc:
|
|
||||||
|
|
||||||
def self.number! output, mode = :table, options = {}
|
|
||||||
return self unless mode
|
|
||||||
|
|
||||||
options = DEFAULT_OPTIONS.merge options
|
|
||||||
|
|
||||||
start = options[:line_number_start]
|
|
||||||
unless start.is_a? Integer
|
|
||||||
raise ArgumentError, "Invalid value %p for :line_number_start; Integer expected." % start
|
|
||||||
end
|
|
||||||
|
|
||||||
anchor_prefix = options[:line_number_anchors]
|
|
||||||
anchor_prefix = 'line' if anchor_prefix == true
|
|
||||||
anchor_prefix = anchor_prefix.to_s[/\w+/] if anchor_prefix
|
|
||||||
anchoring =
|
|
||||||
if anchor_prefix
|
|
||||||
proc do |line|
|
|
||||||
line = line.to_s
|
|
||||||
anchor = anchor_prefix + line
|
|
||||||
"<a href=\"##{anchor}\" name=\"#{anchor}\">#{line}</a>"
|
|
||||||
end
|
|
||||||
else
|
|
||||||
proc { |line| line.to_s } # :to_s.to_proc in Ruby 1.8.7+
|
|
||||||
end
|
|
||||||
|
|
||||||
bold_every = options[:bold_every]
|
|
||||||
highlight_lines = options[:highlight_lines]
|
|
||||||
bolding =
|
|
||||||
if bold_every == false && highlight_lines == nil
|
|
||||||
anchoring
|
|
||||||
elsif highlight_lines.is_a? Enumerable
|
|
||||||
highlight_lines = highlight_lines.to_set
|
|
||||||
proc do |line|
|
|
||||||
if highlight_lines.include? line
|
|
||||||
"<strong class=\"highlighted\">#{anchoring[line]}</strong>" # highlighted line numbers in bold
|
|
||||||
else
|
|
||||||
anchoring[line]
|
|
||||||
end
|
|
||||||
end
|
|
||||||
elsif bold_every.is_a? Integer
|
|
||||||
raise ArgumentError, ":bolding can't be 0." if bold_every == 0
|
|
||||||
proc do |line|
|
|
||||||
if line % bold_every == 0
|
|
||||||
"<strong>#{anchoring[line]}</strong>" # every bold_every-th number in bold
|
|
||||||
else
|
|
||||||
anchoring[line]
|
|
||||||
end
|
|
||||||
end
|
|
||||||
else
|
|
||||||
raise ArgumentError, 'Invalid value %p for :bold_every; false or Integer expected.' % bold_every
|
|
||||||
end
|
|
||||||
|
|
||||||
line_count = output.count("\n")
|
|
||||||
position_of_last_newline = output.rindex(RUBY_VERSION >= '1.9' ? /\n/ : ?\n)
|
|
||||||
if position_of_last_newline
|
|
||||||
after_last_newline = output[position_of_last_newline + 1 .. -1]
|
|
||||||
ends_with_newline = after_last_newline[/\A(?:<\/span>)*\z/]
|
|
||||||
line_count += 1 if not ends_with_newline
|
|
||||||
end
|
|
||||||
|
|
||||||
case mode
|
|
||||||
when :inline
|
|
||||||
max_width = (start + line_count).to_s.size
|
|
||||||
line_number = start
|
|
||||||
nesting = []
|
|
||||||
output.gsub!(/^.*$\n?/) do |line|
|
|
||||||
line.chomp!
|
|
||||||
open = nesting.join
|
|
||||||
line.scan(%r!<(/)?span[^>]*>?!) do |close,|
|
|
||||||
if close
|
|
||||||
nesting.pop
|
|
||||||
else
|
|
||||||
nesting << $&
|
|
||||||
end
|
|
||||||
end
|
|
||||||
close = '</span>' * nesting.size
|
|
||||||
|
|
||||||
line_number_text = bolding.call line_number
|
|
||||||
indent = ' ' * (max_width - line_number.to_s.size) # TODO: Optimize (10^x)
|
|
||||||
line_number += 1
|
|
||||||
"<span class=\"line-numbers\">#{indent}#{line_number_text}</span>#{open}#{line}#{close}\n"
|
|
||||||
end
|
|
||||||
|
|
||||||
when :table
|
|
||||||
line_numbers = (start ... start + line_count).map(&bolding).join("\n")
|
|
||||||
line_numbers << "\n"
|
|
||||||
line_numbers_table_template = Output::TABLE.apply('LINE_NUMBERS', line_numbers)
|
|
||||||
|
|
||||||
output.gsub!(/<\/div>\n/, '</div>')
|
|
||||||
output.wrap_in! line_numbers_table_template
|
|
||||||
output.wrapped_in = :div
|
|
||||||
|
|
||||||
when :list
|
|
||||||
raise NotImplementedError, 'The :list option is no longer available. Use :table.'
|
|
||||||
|
|
||||||
else
|
|
||||||
raise ArgumentError, 'Unknown value %p for mode: expected one of %p' %
|
|
||||||
[mode, [:table, :inline]]
|
|
||||||
end
|
|
||||||
|
|
||||||
output
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,158 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
class HTML
|
|
||||||
|
|
||||||
# This module is included in the output String of the HTML Encoder.
|
|
||||||
#
|
|
||||||
# It provides methods like wrap, div, page etc.
|
|
||||||
#
|
|
||||||
# Remember to use #clone instead of #dup to keep the modules the object was
|
|
||||||
# extended with.
|
|
||||||
#
|
|
||||||
# TODO: Rewrite this without monkey patching.
|
|
||||||
module Output
|
|
||||||
|
|
||||||
attr_accessor :css
|
|
||||||
|
|
||||||
class << self
|
|
||||||
|
|
||||||
# Warns if an object that doesn't respond to to_str is extended by Output,
|
|
||||||
# to prevent users from misuse. Use Module#remove_method to disable.
|
|
||||||
def extended o # :nodoc:
|
|
||||||
warn "The Output module is intended to extend instances of String, not #{o.class}." unless o.respond_to? :to_str
|
|
||||||
end
|
|
||||||
|
|
||||||
def make_stylesheet css, in_tag = false # :nodoc:
|
|
||||||
sheet = css.stylesheet
|
|
||||||
sheet = <<-'CSS' if in_tag
|
|
||||||
<style type="text/css">
|
|
||||||
#{sheet}
|
|
||||||
</style>
|
|
||||||
CSS
|
|
||||||
sheet
|
|
||||||
end
|
|
||||||
|
|
||||||
def page_template_for_css css # :nodoc:
|
|
||||||
sheet = make_stylesheet css
|
|
||||||
PAGE.apply 'CSS', sheet
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
def wrapped_in? element
|
|
||||||
wrapped_in == element
|
|
||||||
end
|
|
||||||
|
|
||||||
def wrapped_in
|
|
||||||
@wrapped_in ||= nil
|
|
||||||
end
|
|
||||||
attr_writer :wrapped_in
|
|
||||||
|
|
||||||
def wrap_in! template
|
|
||||||
Template.wrap! self, template, 'CONTENT'
|
|
||||||
self
|
|
||||||
end
|
|
||||||
|
|
||||||
def apply_title! title
|
|
||||||
self.sub!(/(<title>)(<\/title>)/) { $1 + title + $2 }
|
|
||||||
self
|
|
||||||
end
|
|
||||||
|
|
||||||
def wrap! element, *args
|
|
||||||
return self if not element or element == wrapped_in
|
|
||||||
case element
|
|
||||||
when :div
|
|
||||||
raise "Can't wrap %p in %p" % [wrapped_in, element] unless wrapped_in? nil
|
|
||||||
wrap_in! DIV
|
|
||||||
when :span
|
|
||||||
raise "Can't wrap %p in %p" % [wrapped_in, element] unless wrapped_in? nil
|
|
||||||
wrap_in! SPAN
|
|
||||||
when :page
|
|
||||||
wrap! :div if wrapped_in? nil
|
|
||||||
raise "Can't wrap %p in %p" % [wrapped_in, element] unless wrapped_in? :div
|
|
||||||
wrap_in! Output.page_template_for_css(@css)
|
|
||||||
if args.first.is_a?(Hash) && title = args.first[:title]
|
|
||||||
apply_title! title
|
|
||||||
end
|
|
||||||
self
|
|
||||||
when nil
|
|
||||||
return self
|
|
||||||
else
|
|
||||||
raise "Unknown value %p for :wrap" % element
|
|
||||||
end
|
|
||||||
@wrapped_in = element
|
|
||||||
self
|
|
||||||
end
|
|
||||||
|
|
||||||
def stylesheet in_tag = false
|
|
||||||
Output.make_stylesheet @css, in_tag
|
|
||||||
end
|
|
||||||
|
|
||||||
#-- don't include the templates in docu
|
|
||||||
|
|
||||||
class Template < String # :nodoc:
|
|
||||||
|
|
||||||
def self.wrap! str, template, target
|
|
||||||
target = Regexp.new(Regexp.escape("<%#{target}%>"))
|
|
||||||
if template =~ target
|
|
||||||
str[0,0] = $`
|
|
||||||
str << $'
|
|
||||||
else
|
|
||||||
raise "Template target <%%%p%%> not found" % target
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
def apply target, replacement
|
|
||||||
target = Regexp.new(Regexp.escape("<%#{target}%>"))
|
|
||||||
if self =~ target
|
|
||||||
Template.new($` + replacement + $')
|
|
||||||
else
|
|
||||||
raise "Template target <%%%p%%> not found" % target
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
SPAN = Template.new '<span class="CodeRay"><%CONTENT%></span>'
|
|
||||||
|
|
||||||
DIV = Template.new <<-DIV
|
|
||||||
<div class="CodeRay">
|
|
||||||
<div class="code"><pre><%CONTENT%></pre></div>
|
|
||||||
</div>
|
|
||||||
DIV
|
|
||||||
|
|
||||||
TABLE = Template.new <<-TABLE
|
|
||||||
<table class="CodeRay"><tr>
|
|
||||||
<td class="line-numbers" title="double click to toggle" ondblclick="with (this.firstChild.style) { display = (display == '') ? 'none' : '' }"><pre><%LINE_NUMBERS%></pre></td>
|
|
||||||
<td class="code"><pre><%CONTENT%></pre></td>
|
|
||||||
</tr></table>
|
|
||||||
TABLE
|
|
||||||
|
|
||||||
PAGE = Template.new <<-PAGE
|
|
||||||
<!DOCTYPE html>
|
|
||||||
<html>
|
|
||||||
<head>
|
|
||||||
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
|
|
||||||
<title></title>
|
|
||||||
<style type="text/css">
|
|
||||||
.CodeRay .line-numbers a {
|
|
||||||
text-decoration: inherit;
|
|
||||||
color: inherit;
|
|
||||||
}
|
|
||||||
<%CSS%>
|
|
||||||
</style>
|
|
||||||
</head>
|
|
||||||
<body style="background-color: white;">
|
|
||||||
|
|
||||||
<%CONTENT%>
|
|
||||||
</body>
|
|
||||||
</html>
|
|
||||||
PAGE
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,83 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
# A simple JSON Encoder.
|
|
||||||
#
|
|
||||||
# Example:
|
|
||||||
# CodeRay.scan('puts "Hello world!"', :ruby).json
|
|
||||||
# yields
|
|
||||||
# [
|
|
||||||
# {"type"=>"text", "text"=>"puts", "kind"=>"ident"},
|
|
||||||
# {"type"=>"text", "text"=>" ", "kind"=>"space"},
|
|
||||||
# {"type"=>"block", "action"=>"open", "kind"=>"string"},
|
|
||||||
# {"type"=>"text", "text"=>"\"", "kind"=>"delimiter"},
|
|
||||||
# {"type"=>"text", "text"=>"Hello world!", "kind"=>"content"},
|
|
||||||
# {"type"=>"text", "text"=>"\"", "kind"=>"delimiter"},
|
|
||||||
# {"type"=>"block", "action"=>"close", "kind"=>"string"},
|
|
||||||
# ]
|
|
||||||
class JSON < Encoder
|
|
||||||
|
|
||||||
begin
|
|
||||||
require 'json'
|
|
||||||
rescue LoadError
|
|
||||||
begin
|
|
||||||
require 'rubygems' unless defined? Gem
|
|
||||||
gem 'json'
|
|
||||||
require 'json'
|
|
||||||
rescue LoadError
|
|
||||||
$stderr.puts "The JSON encoder needs the JSON library.\n" \
|
|
||||||
"Please gem install json."
|
|
||||||
raise
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
register_for :json
|
|
||||||
FILE_EXTENSION = 'json'
|
|
||||||
|
|
||||||
protected
|
|
||||||
def setup options
|
|
||||||
super
|
|
||||||
|
|
||||||
@first = true
|
|
||||||
@out << '['
|
|
||||||
end
|
|
||||||
|
|
||||||
def finish options
|
|
||||||
@out << ']'
|
|
||||||
end
|
|
||||||
|
|
||||||
def append data
|
|
||||||
if @first
|
|
||||||
@first = false
|
|
||||||
else
|
|
||||||
@out << ','
|
|
||||||
end
|
|
||||||
|
|
||||||
@out << data.to_json
|
|
||||||
end
|
|
||||||
|
|
||||||
public
|
|
||||||
def text_token text, kind
|
|
||||||
append :type => 'text', :text => text, :kind => kind
|
|
||||||
end
|
|
||||||
|
|
||||||
def begin_group kind
|
|
||||||
append :type => 'block', :action => 'open', :kind => kind
|
|
||||||
end
|
|
||||||
|
|
||||||
def end_group kind
|
|
||||||
append :type => 'block', :action => 'close', :kind => kind
|
|
||||||
end
|
|
||||||
|
|
||||||
def begin_line kind
|
|
||||||
append :type => 'block', :action => 'begin_line', :kind => kind
|
|
||||||
end
|
|
||||||
|
|
||||||
def end_line kind
|
|
||||||
append :type => 'block', :action => 'end_line', :kind => kind
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,45 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
# Counts the LoC (Lines of Code). Returns an Integer >= 0.
|
|
||||||
#
|
|
||||||
# Alias: +loc+
|
|
||||||
#
|
|
||||||
# Everything that is not comment, markup, doctype/shebang, or an empty line,
|
|
||||||
# is considered to be code.
|
|
||||||
#
|
|
||||||
# For example,
|
|
||||||
# * HTML files not containing JavaScript have 0 LoC
|
|
||||||
# * in a Java class without comments, LoC is the number of non-empty lines
|
|
||||||
#
|
|
||||||
# A Scanner class should define the token kinds that are not code in the
|
|
||||||
# KINDS_NOT_LOC constant, which defaults to [:comment, :doctype].
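# Example (a sketch; :loc is the alias registered in the encoder map):
#
#  CodeRay.encode("# a comment\nputs 1\n\n", :ruby, :loc)  #-> 1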
|
|
||||||
class LinesOfCode < TokenKindFilter
|
|
||||||
|
|
||||||
register_for :lines_of_code
|
|
||||||
|
|
||||||
NON_EMPTY_LINE = /^\s*\S.*$/
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def setup options
|
|
||||||
if scanner
|
|
||||||
kinds_not_loc = scanner.class::KINDS_NOT_LOC
|
|
||||||
else
|
|
||||||
warn "Tokens have no associated scanner, counting all nonempty lines." if $VERBOSE
|
|
||||||
kinds_not_loc = CodeRay::Scanners::Scanner::KINDS_NOT_LOC
|
|
||||||
end
|
|
||||||
|
|
||||||
options[:exclude] = kinds_not_loc
|
|
||||||
|
|
||||||
super options
|
|
||||||
end
|
|
||||||
|
|
||||||
def finish options
|
|
||||||
output @tokens.text.scan(NON_EMPTY_LINE).size
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,18 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
# = Null Encoder
|
|
||||||
#
|
|
||||||
# Does nothing and returns an empty string.
|
|
||||||
class Null < Encoder
|
|
||||||
|
|
||||||
register_for :null
|
|
||||||
|
|
||||||
def text_token text, kind
|
|
||||||
# do nothing
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,24 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
load :html
|
|
||||||
|
|
||||||
# Wraps the output into a HTML page, using CSS classes and
|
|
||||||
# line numbers in the table format by default.
|
|
||||||
#
|
|
||||||
# See Encoders::HTML for available options.
|
|
||||||
class Page < HTML
|
|
||||||
|
|
||||||
FILE_EXTENSION = 'html'
|
|
||||||
|
|
||||||
register_for :page
|
|
||||||
|
|
||||||
DEFAULT_OPTIONS = HTML::DEFAULT_OPTIONS.merge \
|
|
||||||
:css => :class,
|
|
||||||
:wrap => :page,
|
|
||||||
:line_numbers => :table
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,23 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
load :html
|
|
||||||
|
|
||||||
# Wraps HTML output into a SPAN element, using inline styles by default.
|
|
||||||
#
|
|
||||||
# See Encoders::HTML for available options.
|
|
||||||
class Span < HTML
|
|
||||||
|
|
||||||
FILE_EXTENSION = 'span.html'
|
|
||||||
|
|
||||||
register_for :span
|
|
||||||
|
|
||||||
DEFAULT_OPTIONS = HTML::DEFAULT_OPTIONS.merge \
|
|
||||||
:css => :style,
|
|
||||||
:wrap => :span,
|
|
||||||
:line_numbers => false
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,96 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
# Makes a statistic for the given tokens.
|
|
||||||
#
|
|
||||||
# Alias: +stats+
|
|
||||||
class Statistic < Encoder
|
|
||||||
|
|
||||||
register_for :statistic
|
|
||||||
|
|
||||||
attr_reader :type_stats, :real_token_count # :nodoc:
|
|
||||||
|
|
||||||
TypeStats = Struct.new :count, :size # :nodoc:
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def setup options
|
|
||||||
super
|
|
||||||
|
|
||||||
@type_stats = Hash.new { |h, k| h[k] = TypeStats.new 0, 0 }
|
|
||||||
@real_token_count = 0
|
|
||||||
end
|
|
||||||
|
|
||||||
STATS = <<-STATS # :nodoc:
|
|
||||||
|
|
||||||
Code Statistics
|
|
||||||
|
|
||||||
Tokens %8d
|
|
||||||
Non-Whitespace %8d
|
|
||||||
Bytes Total %8d
|
|
||||||
|
|
||||||
Token Types (%d):
|
|
||||||
type count ratio size (average)
|
|
||||||
-------------------------------------------------------------
|
|
||||||
%s
|
|
||||||
STATS
|
|
||||||
|
|
||||||
TOKEN_TYPES_ROW = <<-TKR # :nodoc:
|
|
||||||
%-20s %8d %6.2f %% %5.1f
|
|
||||||
TKR
|
|
||||||
|
|
||||||
def finish options
|
|
||||||
all = @type_stats['TOTAL']
|
|
||||||
all_count, all_size = all.count, all.size
|
|
||||||
@type_stats.each do |type, stat|
|
|
||||||
stat.size /= stat.count.to_f
|
|
||||||
end
|
|
||||||
types_stats = @type_stats.sort_by { |k, v| [-v.count, k.to_s] }.map do |k, v|
|
|
||||||
TOKEN_TYPES_ROW % [k, v.count, 100.0 * v.count / all_count, v.size]
|
|
||||||
end.join
|
|
||||||
@out << STATS % [
|
|
||||||
all_count, @real_token_count, all_size,
|
|
||||||
@type_stats.delete_if { |k, v| k.is_a? String }.size,
|
|
||||||
types_stats
|
|
||||||
]
|
|
||||||
|
|
||||||
super
|
|
||||||
end
|
|
||||||
|
|
||||||
public
|
|
||||||
|
|
||||||
def text_token text, kind
|
|
||||||
@real_token_count += 1 unless kind == :space
|
|
||||||
@type_stats[kind].count += 1
|
|
||||||
@type_stats[kind].size += text.size
|
|
||||||
@type_stats['TOTAL'].size += text.size
|
|
||||||
@type_stats['TOTAL'].count += 1
|
|
||||||
end
|
|
||||||
|
|
||||||
# TODO Hierarchy handling
|
|
||||||
def begin_group kind
|
|
||||||
block_token ':begin_group', kind
|
|
||||||
end
|
|
||||||
|
|
||||||
def end_group kind
|
|
||||||
block_token ':end_group', kind
|
|
||||||
end
|
|
||||||
|
|
||||||
def begin_line kind
|
|
||||||
block_token ':begin_line', kind
|
|
||||||
end
|
|
||||||
|
|
||||||
def end_line kind
|
|
||||||
block_token ':end_line', kind
|
|
||||||
end
|
|
||||||
|
|
||||||
def block_token action, kind
|
|
||||||
@type_stats['TOTAL'].count += 1
|
|
||||||
@type_stats[action].count += 1
|
|
||||||
@type_stats[kind].count += 1
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,179 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
# Outputs code highlighted for a color terminal.
|
|
||||||
#
|
|
||||||
# Note: This encoder is in beta. It currently doesn't use the Styles.
|
|
||||||
#
|
|
||||||
# Alias: +term+
|
|
||||||
#
|
|
||||||
# == Authors & License
|
|
||||||
#
|
|
||||||
# By Rob Aldred (http://robaldred.co.uk)
|
|
||||||
#
|
|
||||||
# Based on idea by Nathan Weizenbaum (http://nex-3.com)
|
|
||||||
#
|
|
||||||
# MIT License (http://www.opensource.org/licenses/mit-license.php)
|
|
||||||
class Terminal < Encoder
|
|
||||||
|
|
||||||
register_for :terminal
|
|
||||||
|
|
||||||
TOKEN_COLORS = {
|
|
||||||
:annotation => '35',
|
|
||||||
:attribute_name => '33',
|
|
||||||
:attribute_value => '31',
|
|
||||||
:binary => '1;35',
|
|
||||||
:char => {
|
|
||||||
:self => '36', :delimiter => '34'
|
|
||||||
},
|
|
||||||
:class => '1;35',
|
|
||||||
:class_variable => '36',
|
|
||||||
:color => '32',
|
|
||||||
:comment => '37',
|
|
||||||
:complex => '34',
|
|
||||||
:constant => ['34', '4'],
|
|
||||||
:decoration => '35',
|
|
||||||
:definition => '1;32',
|
|
||||||
:directive => ['32', '4'],
|
|
||||||
:doc => '46',
|
|
||||||
:doctype => '1;30',
|
|
||||||
:doc_string => ['31', '4'],
|
|
||||||
:entity => '33',
|
|
||||||
:error => ['1;33', '41'],
|
|
||||||
:exception => '1;31',
|
|
||||||
:float => '1;35',
|
|
||||||
:function => '1;34',
|
|
||||||
:global_variable => '42',
|
|
||||||
:hex => '1;36',
|
|
||||||
:include => '33',
|
|
||||||
:integer => '1;34',
|
|
||||||
:key => '35',
|
|
||||||
:label => '1;15',
|
|
||||||
:local_variable => '33',
|
|
||||||
:octal => '1;35',
|
|
||||||
:operator_name => '1;29',
|
|
||||||
:predefined_constant => '1;36',
|
|
||||||
:predefined_type => '1;30',
|
|
||||||
:predefined => ['4', '1;34'],
|
|
||||||
:preprocessor => '36',
|
|
||||||
:pseudo_class => '34',
|
|
||||||
:regexp => {
|
|
||||||
:self => '31',
|
|
||||||
:content => '31',
|
|
||||||
:delimiter => '1;29',
|
|
||||||
:modifier => '35',
|
|
||||||
:function => '1;29'
|
|
||||||
},
|
|
||||||
:reserved => '1;31',
|
|
||||||
:shell => {
|
|
||||||
:self => '42',
|
|
||||||
:content => '1;29',
|
|
||||||
:delimiter => '37',
|
|
||||||
},
|
|
||||||
:string => {
|
|
||||||
:self => '32',
|
|
||||||
:modifier => '1;32',
|
|
||||||
:escape => '1;36',
|
|
||||||
:delimiter => '1;32',
|
|
||||||
},
|
|
||||||
:symbol => '1;32',
|
|
||||||
:tag => '34',
|
|
||||||
:type => '1;34',
|
|
||||||
:value => '36',
|
|
||||||
:variable => '34',
|
|
||||||
|
|
||||||
:insert => '42',
|
|
||||||
:delete => '41',
|
|
||||||
:change => '44',
|
|
||||||
:head => '45'
|
|
||||||
}
|
|
||||||
TOKEN_COLORS[:keyword] = TOKEN_COLORS[:reserved]
|
|
||||||
TOKEN_COLORS[:method] = TOKEN_COLORS[:function]
|
|
||||||
TOKEN_COLORS[:imaginary] = TOKEN_COLORS[:complex]
|
|
||||||
TOKEN_COLORS[:begin_group] = TOKEN_COLORS[:end_group] =
|
|
||||||
TOKEN_COLORS[:escape] = TOKEN_COLORS[:delimiter]
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def setup(options)
|
|
||||||
super
|
|
||||||
@opened = []
|
|
||||||
@subcolors = nil
|
|
||||||
end
|
|
||||||
|
|
||||||
public
|
|
||||||
|
|
||||||
def text_token text, kind
|
|
||||||
if color = (@subcolors || TOKEN_COLORS)[kind]
|
|
||||||
if Hash === color
|
|
||||||
if color[:self]
|
|
||||||
color = color[:self]
|
|
||||||
else
|
|
||||||
@out << text
|
|
||||||
return
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
@out << ansi_colorize(color)
|
|
||||||
@out << text.gsub("\n", ansi_clear + "\n" + ansi_colorize(color))
|
|
||||||
@out << ansi_clear
|
|
||||||
@out << ansi_colorize(@subcolors[:self]) if @subcolors && @subcolors[:self]
|
|
||||||
else
|
|
||||||
@out << text
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
def begin_group kind
|
|
||||||
@opened << kind
|
|
||||||
@out << open_token(kind)
|
|
||||||
end
|
|
||||||
alias begin_line begin_group
|
|
||||||
|
|
||||||
def end_group kind
|
|
||||||
if @opened.empty?
|
|
||||||
# nothing to close
|
|
||||||
else
|
|
||||||
@opened.pop
|
|
||||||
@out << ansi_clear
|
|
||||||
@out << open_token(@opened.last)
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
def end_line kind
|
|
||||||
if @opened.empty?
|
|
||||||
# nothing to close
|
|
||||||
else
|
|
||||||
@opened.pop
|
|
||||||
# whole lines to be highlighted,
|
|
||||||
# e.g. added/modified/deleted lines in a diff
|
|
||||||
@out << "\t" * 100 + ansi_clear
|
|
||||||
@out << open_token(@opened.last)
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
private
|
|
||||||
|
|
||||||
def open_token kind
|
|
||||||
if color = TOKEN_COLORS[kind]
|
|
||||||
if Hash === color
|
|
||||||
@subcolors = color
|
|
||||||
ansi_colorize(color[:self]) if color[:self]
|
|
||||||
else
|
|
||||||
@subcolors = {}
|
|
||||||
ansi_colorize(color)
|
|
||||||
end
|
|
||||||
else
|
|
||||||
@subcolors = nil
|
|
||||||
''
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
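# Builds the ANSI escape sequence for one or more SGR codes,
# e.g. ansi_colorize('1;31') #-> "\e[1;31m" (illustrative values).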
def ansi_colorize(color)
|
|
||||||
Array(color).map { |c| "\e[#{c}m" }.join
|
|
||||||
end
|
|
||||||
def ansi_clear
|
|
||||||
ansi_colorize(0)
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,46 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
# Concats the tokens into a single string, resulting in the original
|
|
||||||
# code string if no tokens were removed.
|
|
||||||
#
|
|
||||||
# Alias: +plain+, +plaintext+
|
|
||||||
#
|
|
||||||
# == Options
|
|
||||||
#
|
|
||||||
# === :separator
|
|
||||||
# A separator string to join the tokens.
|
|
||||||
#
|
|
||||||
# Default: empty String
|
|
||||||
class Text < Encoder
|
|
||||||
|
|
||||||
register_for :text
|
|
||||||
|
|
||||||
FILE_EXTENSION = 'txt'
|
|
||||||
|
|
||||||
DEFAULT_OPTIONS = {
|
|
||||||
:separator => nil
|
|
||||||
}
|
|
||||||
|
|
||||||
def text_token text, kind
|
|
||||||
super
|
|
||||||
|
|
||||||
if @first
|
|
||||||
@first = false
|
|
||||||
else
|
|
||||||
@out << @sep
|
|
||||||
end if @sep
|
|
||||||
end
|
|
||||||
|
|
||||||
protected
|
|
||||||
def setup options
|
|
||||||
super
|
|
||||||
|
|
||||||
@first = true
|
|
||||||
@sep = options[:separator]
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,111 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
load :filter
|
|
||||||
|
|
||||||
# A Filter that selects tokens based on their token kind.
|
|
||||||
#
|
|
||||||
# == Options
|
|
||||||
#
|
|
||||||
# === :exclude
|
|
||||||
#
|
|
||||||
# One or many symbols (in an Array) which shall be excluded.
|
|
||||||
#
|
|
||||||
# Default: []
|
|
||||||
#
|
|
||||||
# === :include
|
|
||||||
#
|
|
||||||
# One or many symbols (in an array) which shall be included.
|
|
||||||
#
|
|
||||||
# Default: :all, which means all tokens are included.
|
|
||||||
#
|
|
||||||
# Exclusion wins over inclusion.
|
|
||||||
#
|
|
||||||
# See also: CommentFilter
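# A minimal sketch (token kinds assumed from the Ruby scanner):
#
#  tokens = CodeRay.encode('1 # fun', :ruby, :token_kind_filter, :exclude => :integer)
#  tokens.text  #-> " # fun"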
|
|
||||||
class TokenKindFilter < Filter
|
|
||||||
|
|
||||||
register_for :token_kind_filter
|
|
||||||
|
|
||||||
DEFAULT_OPTIONS = {
|
|
||||||
:exclude => [],
|
|
||||||
:include => :all
|
|
||||||
}
|
|
||||||
|
|
||||||
protected
|
|
||||||
def setup options
|
|
||||||
super
|
|
||||||
|
|
||||||
@group_excluded = false
|
|
||||||
@exclude = options[:exclude]
|
|
||||||
@exclude = Array(@exclude) unless @exclude == :all
|
|
||||||
@include = options[:include]
|
|
||||||
@include = Array(@include) unless @include == :all
|
|
||||||
end
|
|
||||||
|
|
||||||
def include_text_token? text, kind
|
|
||||||
include_group? kind
|
|
||||||
end
|
|
||||||
|
|
||||||
def include_group? kind
|
|
||||||
(@include == :all || @include.include?(kind)) &&
|
|
||||||
!(@exclude == :all || @exclude.include?(kind))
|
|
||||||
end
|
|
||||||
|
|
||||||
public
|
|
||||||
|
|
||||||
# Add the token to the output stream if +kind+ matches the conditions.
|
|
||||||
def text_token text, kind
|
|
||||||
super if !@group_excluded && include_text_token?(text, kind)
|
|
||||||
end
|
|
||||||
|
|
||||||
# Add the token group to the output stream if +kind+ matches the
|
|
||||||
# conditions.
|
|
||||||
#
|
|
||||||
# If it does not, all tokens inside the group are excluded from the
|
|
||||||
# stream, even if their kinds match.
|
|
||||||
def begin_group kind
|
|
||||||
if @group_excluded
|
|
||||||
@group_excluded += 1
|
|
||||||
elsif include_group? kind
|
|
||||||
super
|
|
||||||
else
|
|
||||||
@group_excluded = 1
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# See +begin_group+.
|
|
||||||
def begin_line kind
|
|
||||||
if @group_excluded
|
|
||||||
@group_excluded += 1
|
|
||||||
elsif include_group? kind
|
|
||||||
super
|
|
||||||
else
|
|
||||||
@group_excluded = 1
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# Take care of re-enabling the delegation of tokens to the output stream
|
|
||||||
# if an excluded group has ended.
|
|
||||||
def end_group kind
|
|
||||||
if @group_excluded
|
|
||||||
@group_excluded -= 1
|
|
||||||
@group_excluded = false if @group_excluded.zero?
|
|
||||||
else
|
|
||||||
super
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# See +end_group+.
|
|
||||||
def end_line kind
|
|
||||||
if @group_excluded
|
|
||||||
@group_excluded -= 1
|
|
||||||
@group_excluded = false if @group_excluded.zero?
|
|
||||||
else
|
|
||||||
super
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
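For illustration, a minimal sketch of applying this filter. It assumes the usual top-level CodeRay.scan API and Encoder#encode_tokens (neither is shown in this diff); the sample code and option values are made up:

    require 'coderay'
    tokens = CodeRay.scan('x = 1  # assign x', :ruby)
    filter = CodeRay::Encoders::TokenKindFilter.new
    without_comments = filter.encode_tokens(tokens, :exclude => :comment)  # comment tokens dropped

Because exclusion wins over inclusion, combining :include => :all with :exclude => :comment behaves the same way.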
|
|
|
@ -1,72 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
# = XML Encoder
|
|
||||||
#
|
|
||||||
# Uses REXML. Very slow.
|
|
||||||
class XML < Encoder
|
|
||||||
|
|
||||||
register_for :xml
|
|
||||||
|
|
||||||
FILE_EXTENSION = 'xml'
|
|
||||||
|
|
||||||
autoload :REXML, 'rexml/document'
|
|
||||||
|
|
||||||
DEFAULT_OPTIONS = {
|
|
||||||
:tab_width => 8,
|
|
||||||
:pretty => -1,
|
|
||||||
:transitive => false,
|
|
||||||
}
|
|
||||||
|
|
||||||
protected
|
|
||||||
def setup options
|
|
||||||
super
|
|
||||||
|
|
||||||
@doc = REXML::Document.new
|
|
||||||
@doc << REXML::XMLDecl.new
|
|
||||||
@tab_width = options[:tab_width]
|
|
||||||
@root = @node = @doc.add_element('coderay-tokens')
|
|
||||||
end
|
|
||||||
|
|
||||||
def finish options
|
|
||||||
@doc.write @out, options[:pretty], options[:transitive], true
|
|
||||||
|
|
||||||
super
|
|
||||||
end
|
|
||||||
|
|
||||||
public
|
|
||||||
def text_token text, kind
|
|
||||||
if kind == :space
|
|
||||||
token = @node
|
|
||||||
else
|
|
||||||
token = @node.add_element kind.to_s
|
|
||||||
end
|
|
||||||
text.scan(/(\x20+)|(\t+)|(\n)|[^\x20\t\n]+/) do |space, tab, nl|
|
|
||||||
case
|
|
||||||
when space
|
|
||||||
token << REXML::Text.new(space, true)
|
|
||||||
when tab
|
|
||||||
token << REXML::Text.new(tab, true)
|
|
||||||
when nl
|
|
||||||
token << REXML::Text.new(nl, true)
|
|
||||||
else
|
|
||||||
token << REXML::Text.new($&)
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
def begin_group kind
|
|
||||||
@node = @node.add_element kind.to_s
|
|
||||||
end
|
|
||||||
|
|
||||||
def end_group kind
|
|
||||||
if @node == @root
|
|
||||||
raise 'no token to close!'
|
|
||||||
end
|
|
||||||
@node = @node.parent
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
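A minimal usage sketch; CodeRay.encode with a format argument is the same call used by the RedCloth integration further below in this diff:

    require 'coderay'
    puts CodeRay.encode('1 + 1', :ruby, :xml)
    # emits an XML document with a <coderay-tokens> root and one child element per non-space token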
|
|
|
@ -1,50 +0,0 @@
|
||||||
autoload :YAML, 'yaml'
|
|
||||||
|
|
||||||
module CodeRay
|
|
||||||
module Encoders
|
|
||||||
|
|
||||||
# = YAML Encoder
|
|
||||||
#
|
|
||||||
# Slow.
|
|
||||||
class YAML < Encoder
|
|
||||||
|
|
||||||
register_for :yaml
|
|
||||||
|
|
||||||
FILE_EXTENSION = 'yaml'
|
|
||||||
|
|
||||||
protected
|
|
||||||
def setup options
|
|
||||||
super
|
|
||||||
|
|
||||||
@data = []
|
|
||||||
end
|
|
||||||
|
|
||||||
def finish options
|
|
||||||
output ::YAML.dump(@data)
|
|
||||||
end
|
|
||||||
|
|
||||||
public
|
|
||||||
def text_token text, kind
|
|
||||||
@data << [text, kind]
|
|
||||||
end
|
|
||||||
|
|
||||||
def begin_group kind
|
|
||||||
@data << [:begin_group, kind]
|
|
||||||
end
|
|
||||||
|
|
||||||
def end_group kind
|
|
||||||
@data << [:end_group, kind]
|
|
||||||
end
|
|
||||||
|
|
||||||
def begin_line kind
|
|
||||||
@data << [:begin_line, kind]
|
|
||||||
end
|
|
||||||
|
|
||||||
def end_line kind
|
|
||||||
@data << [:end_line, kind]
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
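A minimal sketch of the YAML encoder, using the C scanner that appears later in this diff (token kinds taken from that scanner; the exact YAML layout is approximate):

    require 'coderay'
    puts CodeRay.encode('int x;', :c, :yaml)
    # dumps the collected [text, kind] pairs, roughly:
    #   [["int", :predefined_type], [" ", :space], ["x", :ident], [";", :operator]]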
|
|
|
@ -1,95 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
|
|
||||||
# A little hack to enable CodeRay highlighting in RedCloth.
|
|
||||||
#
|
|
||||||
# Usage:
|
|
||||||
# require 'coderay'
|
|
||||||
# require 'coderay/for_redcloth'
|
|
||||||
# RedCloth.new('@[ruby]puts "Hello, World!"@').to_html
|
|
||||||
#
|
|
||||||
# Make sure you have RedCloth 4.0.3 activated, for example by calling
|
|
||||||
# require 'rubygems'
|
|
||||||
# before RedCloth is loaded and before calling CodeRay.for_redcloth.
|
|
||||||
module ForRedCloth
|
|
||||||
|
|
||||||
def self.install
|
|
||||||
gem 'RedCloth', '>= 4.0.3' if defined? gem
|
|
||||||
require 'redcloth'
|
|
||||||
unless RedCloth::VERSION.to_s >= '4.0.3'
|
|
||||||
if defined? gem
|
|
||||||
raise 'CodeRay.for_redcloth needs RedCloth version 4.0.3 or later. ' +
|
|
||||||
"You have #{RedCloth::VERSION}. Please gem install RedCloth."
|
|
||||||
else
|
|
||||||
$".delete 'redcloth.rb' # sorry, but it works
|
|
||||||
require 'rubygems'
|
|
||||||
return install # retry
|
|
||||||
end
|
|
||||||
end
|
|
||||||
unless RedCloth::VERSION.to_s >= '4.2.2'
|
|
||||||
warn 'CodeRay.for_redcloth works best with RedCloth version 4.2.2 or later.'
|
|
||||||
end
|
|
||||||
RedCloth::TextileDoc.send :include, ForRedCloth::TextileDoc
|
|
||||||
RedCloth::Formatters::HTML.module_eval do
|
|
||||||
def unescape(html) # :nodoc:
|
|
||||||
replacements = {
|
|
||||||
'&amp;' => '&',
|
|
||||||
'"' => '"',
|
|
||||||
'&gt;' => '>',
|
|
||||||
'&lt;' => '<',
|
|
||||||
}
|
|
||||||
html.gsub(/&(?:amp|quot|[gl]t);/) { |entity| replacements[entity] }
|
|
||||||
end
|
|
||||||
undef code, bc_open, bc_close, escape_pre
|
|
||||||
def code(opts) # :nodoc:
|
|
||||||
opts[:block] = true
|
|
||||||
if !opts[:lang] && RedCloth::VERSION.to_s >= '4.2.0'
|
|
||||||
# simulating pre-4.2 behavior
|
|
||||||
if opts[:text].sub!(/\A\[(\w+)\]/, '')
|
|
||||||
if CodeRay::Scanners[$1].lang == :text
|
|
||||||
opts[:text] = $& + opts[:text]
|
|
||||||
else
|
|
||||||
opts[:lang] = $1
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
if opts[:lang] && !filter_coderay
|
|
||||||
require 'coderay'
|
|
||||||
@in_bc ||= nil
|
|
||||||
format = @in_bc ? :div : :span
|
|
||||||
opts[:text] = unescape(opts[:text]) unless @in_bc
|
|
||||||
highlighted_code = CodeRay.encode opts[:text], opts[:lang], format
|
|
||||||
highlighted_code.sub!(/\A<(span|div)/) { |m| m + pba(@in_bc || opts) }
|
|
||||||
highlighted_code
|
|
||||||
else
|
|
||||||
"<code#{pba(opts)}>#{opts[:text]}</code>"
|
|
||||||
end
|
|
||||||
end
|
|
||||||
def bc_open(opts) # :nodoc:
|
|
||||||
opts[:block] = true
|
|
||||||
@in_bc = opts
|
|
||||||
opts[:lang] ? '' : "<pre#{pba(opts)}>"
|
|
||||||
end
|
|
||||||
def bc_close(opts) # :nodoc:
|
|
||||||
opts = @in_bc
|
|
||||||
@in_bc = nil
|
|
||||||
opts[:lang] ? '' : "</pre>\n"
|
|
||||||
end
|
|
||||||
def escape_pre(text) # :nodoc:
|
|
||||||
if @in_bc ||= nil
|
|
||||||
text
|
|
||||||
else
|
|
||||||
html_esc(text, :html_escape_preformatted)
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
module TextileDoc # :nodoc:
|
|
||||||
attr_accessor :filter_coderay
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
CodeRay::ForRedCloth.install
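Besides the inline @[lang]...@ form shown in the module comment, block code goes through the bc_open/bc_close hooks above; a hedged sketch, assuming RedCloth's bc[lang]. block notation sets opts[:lang] as handled there:

    require 'coderay'
    require 'coderay/for_redcloth'
    puts RedCloth.new("bc[ruby]. puts 1 + 1").to_html  # highlighted block output instead of a plain <pre>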
|
|
|
@ -1,141 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
|
|
||||||
# = FileType
|
|
||||||
#
|
|
||||||
# A simple filetype recognizer.
|
|
||||||
#
|
|
||||||
# == Usage
|
|
||||||
#
|
|
||||||
# # determine the type of the given file
|
|
||||||
# lang = FileType[file_name]
|
|
||||||
#
|
|
||||||
# # return :text if the file type is unknown
|
|
||||||
# lang = FileType.fetch file_name, :text
|
|
||||||
#
|
|
||||||
# # try the shebang line, too
|
|
||||||
# lang = FileType.fetch file_name, :text, true
|
|
||||||
module FileType
|
|
||||||
|
|
||||||
UnknownFileType = Class.new Exception
|
|
||||||
|
|
||||||
class << self
|
|
||||||
|
|
||||||
# Try to determine the file type of the file.
|
|
||||||
#
|
|
||||||
# +filename+ is a relative or absolute path to a file.
|
|
||||||
#
|
|
||||||
# The file itself is only accessed when +read_shebang+ is set to true.
|
|
||||||
# That means you can get filetypes from files that don't exist.
|
|
||||||
def [] filename, read_shebang = false
|
|
||||||
name = File.basename filename
|
|
||||||
ext = File.extname(name).sub(/^\./, '') # from last dot, delete the leading dot
|
|
||||||
ext2 = filename.to_s[/\.(.*)/, 1] # from first dot
|
|
||||||
|
|
||||||
type =
|
|
||||||
TypeFromExt[ext] ||
|
|
||||||
TypeFromExt[ext.downcase] ||
|
|
||||||
(TypeFromExt[ext2] if ext2) ||
|
|
||||||
(TypeFromExt[ext2.downcase] if ext2) ||
|
|
||||||
TypeFromName[name] ||
|
|
||||||
TypeFromName[name.downcase]
|
|
||||||
type ||= shebang(filename) if read_shebang
|
|
||||||
|
|
||||||
type
|
|
||||||
end
|
|
||||||
|
|
||||||
# This works like Hash#fetch.
|
|
||||||
#
|
|
||||||
# If the filetype cannot be found, the +default+ value
|
|
||||||
# is returned.
|
|
||||||
def fetch filename, default = nil, read_shebang = false
|
|
||||||
if default && block_given?
|
|
||||||
warn 'Block supersedes default value argument; use either.'
|
|
||||||
end
|
|
||||||
|
|
||||||
if type = self[filename, read_shebang]
|
|
||||||
type
|
|
||||||
else
|
|
||||||
return yield if block_given?
|
|
||||||
return default if default
|
|
||||||
raise UnknownFileType, 'Could not determine type of %p.' % filename
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def shebang filename
|
|
||||||
return unless File.exist? filename
|
|
||||||
File.open filename, 'r' do |f|
|
|
||||||
if first_line = f.gets
|
|
||||||
if type = first_line[TypeFromShebang]
|
|
||||||
type.to_sym
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
TypeFromExt = {
|
|
||||||
'c' => :c,
|
|
||||||
'cfc' => :xml,
|
|
||||||
'cfm' => :xml,
|
|
||||||
'clj' => :clojure,
|
|
||||||
'css' => :css,
|
|
||||||
'diff' => :diff,
|
|
||||||
'dpr' => :delphi,
|
|
||||||
'gemspec' => :ruby,
|
|
||||||
'groovy' => :groovy,
|
|
||||||
'gvy' => :groovy,
|
|
||||||
'h' => :c,
|
|
||||||
'haml' => :haml,
|
|
||||||
'htm' => :page,
|
|
||||||
'html' => :page,
|
|
||||||
'html.erb' => :erb,
|
|
||||||
'java' => :java,
|
|
||||||
'js' => :java_script,
|
|
||||||
'json' => :json,
|
|
||||||
'mab' => :ruby,
|
|
||||||
'pas' => :delphi,
|
|
||||||
'patch' => :diff,
|
|
||||||
'php' => :php,
|
|
||||||
'php3' => :php,
|
|
||||||
'php4' => :php,
|
|
||||||
'php5' => :php,
|
|
||||||
'prawn' => :ruby,
|
|
||||||
'py' => :python,
|
|
||||||
'py3' => :python,
|
|
||||||
'pyw' => :python,
|
|
||||||
'rake' => :ruby,
|
|
||||||
'raydebug' => :raydebug,
|
|
||||||
'rb' => :ruby,
|
|
||||||
'rbw' => :ruby,
|
|
||||||
'rhtml' => :erb,
|
|
||||||
'rjs' => :ruby,
|
|
||||||
'rpdf' => :ruby,
|
|
||||||
'ru' => :ruby,
|
|
||||||
'rxml' => :ruby,
|
|
||||||
# 'sch' => :scheme,
|
|
||||||
'sql' => :sql,
|
|
||||||
# 'ss' => :scheme,
|
|
||||||
'xhtml' => :page,
|
|
||||||
'xml' => :xml,
|
|
||||||
'yaml' => :yaml,
|
|
||||||
'yml' => :yaml,
|
|
||||||
}
|
|
||||||
for cpp_alias in %w[cc cpp cp cxx c++ C hh hpp h++ cu]
|
|
||||||
TypeFromExt[cpp_alias] = :cpp
|
|
||||||
end
|
|
||||||
|
|
||||||
TypeFromShebang = /\b(?:ruby|perl|python|sh)\b/
|
|
||||||
|
|
||||||
TypeFromName = {
|
|
||||||
'Capfile' => :ruby,
|
|
||||||
'Rakefile' => :ruby,
|
|
||||||
'Rantfile' => :ruby,
|
|
||||||
'Gemfile' => :ruby,
|
|
||||||
}
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
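A few concrete lookups, derived from the tables above:

    CodeRay::FileType['config/deploy.rake']  #=> :ruby   ('rake' is in TypeFromExt)
    CodeRay::FileType['Gemfile']              #=> :ruby   (matched via TypeFromName)
    CodeRay::FileType.fetch('README', :text)  #=> :text   (unknown type, default returned)
    CodeRay::FileType['script.sh', true]      # additionally checks the shebang line if the file exists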
|
|
|
@ -1,41 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
|
|
||||||
# A simplified interface to the gzip library +zlib+ (from the Ruby Standard Library.)
|
|
||||||
module GZip
|
|
||||||
|
|
||||||
require 'zlib'
|
|
||||||
|
|
||||||
# The default zipping level. 7 zips well and fast.
|
|
||||||
DEFAULT_GZIP_LEVEL = 7
|
|
||||||
|
|
||||||
# Unzips the given string +s+.
|
|
||||||
#
|
|
||||||
# Example:
|
|
||||||
# require 'gzip_simple'
|
|
||||||
# print GZip.gunzip(File.read('addresses.gz'))
|
|
||||||
def GZip.gunzip s
|
|
||||||
Zlib::Inflate.inflate s
|
|
||||||
end
|
|
||||||
|
|
||||||
# Zips the given string +s+.
|
|
||||||
#
|
|
||||||
# Example:
|
|
||||||
# require 'gzip_simple'
|
|
||||||
# File.open('addresses.gz', 'w') do |file|
|
|
||||||
# file.write GZip.gzip('Mum: 0123 456 789', 9)
|
|
||||||
# end
|
|
||||||
#
|
|
||||||
# If you provide a +level+, you can control how strong
|
|
||||||
# the string is compressed:
|
|
||||||
# - 0: no compression, only convert to gzip format
|
|
||||||
# - 1: compress fast
|
|
||||||
# - 7: compress more, but still fast (default)
|
|
||||||
# - 8: compress more, slower
|
|
||||||
# - 9: compress best, very slow
|
|
||||||
def GZip.gzip s, level = DEFAULT_GZIP_LEVEL
|
|
||||||
Zlib::Deflate.new(level).deflate s, Zlib::FINISH
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
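As the code shows, the data is handled with Zlib::Deflate/Zlib::Inflate (a raw zlib stream rather than an actual .gz file), so a round trip looks like this (the sample string is made up):

    data   = 'Mum: 0123 456 789 ' * 50
    zipped = CodeRay::GZip.gzip(data)         # default level 7
    CodeRay::GZip.gunzip(zipped) == data       #=> true
    CodeRay::GZip.gzip(data, 9).bytesize       # usually a bit smaller than level 7, but slower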
|
|
|
@ -1,284 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
|
|
||||||
# = PluginHost
|
|
||||||
#
|
|
||||||
# A simple subclass/subfolder plugin system.
|
|
||||||
#
|
|
||||||
# Example:
|
|
||||||
# class Generators
|
|
||||||
# extend PluginHost
|
|
||||||
# plugin_path 'app/generators'
|
|
||||||
# end
|
|
||||||
#
|
|
||||||
# class Generator
|
|
||||||
# extend Plugin
|
|
||||||
# PLUGIN_HOST = Generators
|
|
||||||
# end
|
|
||||||
#
|
|
||||||
# class FancyGenerator < Generator
|
|
||||||
# register_for :fancy
|
|
||||||
# end
|
|
||||||
#
|
|
||||||
# Generators[:fancy] #-> FancyGenerator
|
|
||||||
# # or
|
|
||||||
# CodeRay.require_plugin 'Generators/fancy'
|
|
||||||
# # or
|
|
||||||
# Generators::Fancy
|
|
||||||
module PluginHost
|
|
||||||
|
|
||||||
# Raised if Encoders::[] fails because:
|
|
||||||
# * a file could not be found
|
|
||||||
# * the requested Plugin is not registered
|
|
||||||
PluginNotFound = Class.new LoadError
|
|
||||||
HostNotFound = Class.new LoadError
|
|
||||||
|
|
||||||
PLUGIN_HOSTS = []
|
|
||||||
PLUGIN_HOSTS_BY_ID = {} # dummy hash
|
|
||||||
|
|
||||||
# Loads all plugins using list and load.
|
|
||||||
def load_all
|
|
||||||
for plugin in list
|
|
||||||
load plugin
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# Returns the Plugin for +id+.
|
|
||||||
#
|
|
||||||
# Example:
|
|
||||||
# yaml_plugin = MyPluginHost[:yaml]
|
|
||||||
def [] id, *args, &blk
|
|
||||||
plugin = validate_id(id)
|
|
||||||
begin
|
|
||||||
plugin = plugin_hash.[] plugin, *args, &blk
|
|
||||||
end while plugin.is_a? Symbol
|
|
||||||
plugin
|
|
||||||
end
|
|
||||||
|
|
||||||
alias load []
|
|
||||||
|
|
||||||
# Tries to +load+ the missing plugin by translating +const+ to the
|
|
||||||
# underscore form (e.g. LinesOfCode becomes lines_of_code).
|
|
||||||
def const_missing const
|
|
||||||
id = const.to_s.
|
|
||||||
gsub(/([A-Z]+)([A-Z][a-z])/,'\1_\2').
|
|
||||||
gsub(/([a-z\d])([A-Z])/,'\1_\2').
|
|
||||||
downcase
|
|
||||||
load id
|
|
||||||
end
|
|
||||||
|
|
||||||
class << self
|
|
||||||
|
|
||||||
# Adds the module/class to the PLUGIN_HOSTS list.
|
|
||||||
def extended mod
|
|
||||||
PLUGIN_HOSTS << mod
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
# The path where the plugins can be found.
|
|
||||||
def plugin_path *args
|
|
||||||
unless args.empty?
|
|
||||||
@plugin_path = File.expand_path File.join(*args)
|
|
||||||
end
|
|
||||||
@plugin_path ||= ''
|
|
||||||
end
|
|
||||||
|
|
||||||
# Map a plugin_id to another.
|
|
||||||
#
|
|
||||||
# Usage: Put this in a file plugin_path/_map.rb.
|
|
||||||
#
|
|
||||||
# class MyColorHost < PluginHost
|
|
||||||
# map :navy => :dark_blue,
|
|
||||||
# :maroon => :brown,
|
|
||||||
# :luna => :moon
|
|
||||||
# end
|
|
||||||
def map hash
|
|
||||||
for from, to in hash
|
|
||||||
from = validate_id from
|
|
||||||
to = validate_id to
|
|
||||||
plugin_hash[from] = to unless plugin_hash.has_key? from
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# Define the default plugin to use when no plugin is found
|
|
||||||
# for a given id, or return the default plugin.
|
|
||||||
#
|
|
||||||
# See also map.
|
|
||||||
#
|
|
||||||
# class MyColorHost < PluginHost
|
|
||||||
# map :navy => :dark_blue
|
|
||||||
# default :gray
|
|
||||||
# end
|
|
||||||
#
|
|
||||||
# MyColorHost.default # loads and returns the Gray plugin
|
|
||||||
def default id = nil
|
|
||||||
if id
|
|
||||||
id = validate_id id
|
|
||||||
raise "The default plugin can't be named \"default\"." if id == :default
|
|
||||||
plugin_hash[:default] = id
|
|
||||||
else
|
|
||||||
load :default
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# Every plugin must register itself for +id+ by calling register_for,
|
|
||||||
# which calls this method.
|
|
||||||
#
|
|
||||||
# See Plugin#register_for.
|
|
||||||
def register plugin, id
|
|
||||||
plugin_hash[validate_id(id)] = plugin
|
|
||||||
end
|
|
||||||
|
|
||||||
# A Hash of plugin_id => Plugin pairs.
|
|
||||||
def plugin_hash
|
|
||||||
@plugin_hash ||= make_plugin_hash
|
|
||||||
end
|
|
||||||
|
|
||||||
# Returns an array of all .rb files in the plugin path.
|
|
||||||
#
|
|
||||||
# The extension .rb is not included.
|
|
||||||
def list
|
|
||||||
Dir[path_to('*')].select do |file|
|
|
||||||
File.basename(file)[/^(?!_)\w+\.rb$/]
|
|
||||||
end.map do |file|
|
|
||||||
File.basename(file, '.rb').to_sym
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# Returns an array of all Plugins.
|
|
||||||
#
|
|
||||||
# Note: This loads all plugins using load_all.
|
|
||||||
def all_plugins
|
|
||||||
load_all
|
|
||||||
plugin_hash.values.grep(Class)
|
|
||||||
end
|
|
||||||
|
|
||||||
# Loads the map file (see map).
|
|
||||||
#
|
|
||||||
# This is done automatically when plugin_path is called.
|
|
||||||
def load_plugin_map
|
|
||||||
mapfile = path_to '_map'
|
|
||||||
@plugin_map_loaded = true
|
|
||||||
if File.exist? mapfile
|
|
||||||
require mapfile
|
|
||||||
true
|
|
||||||
else
|
|
||||||
false
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
# Return a plugin hash that automatically loads plugins.
|
|
||||||
def make_plugin_hash
|
|
||||||
@plugin_map_loaded ||= false
|
|
||||||
Hash.new do |h, plugin_id|
|
|
||||||
id = validate_id(plugin_id)
|
|
||||||
path = path_to id
|
|
||||||
begin
|
|
||||||
raise LoadError, "#{path} not found" unless File.exist? path
|
|
||||||
require path
|
|
||||||
rescue LoadError => boom
|
|
||||||
if @plugin_map_loaded
|
|
||||||
if h.has_key?(:default)
|
|
||||||
warn '%p could not load plugin %p; falling back to %p' % [self, id, h[:default]]
|
|
||||||
h[:default]
|
|
||||||
else
|
|
||||||
raise PluginNotFound, '%p could not load plugin %p: %s' % [self, id, boom]
|
|
||||||
end
|
|
||||||
else
|
|
||||||
load_plugin_map
|
|
||||||
h[plugin_id]
|
|
||||||
end
|
|
||||||
else
|
|
||||||
# Plugin should have registered by now
|
|
||||||
if h.has_key? id
|
|
||||||
h[id]
|
|
||||||
else
|
|
||||||
raise PluginNotFound, "No #{self.name} plugin for #{id.inspect} found in #{path}."
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# Returns the expected path to the plugin file for the given id.
|
|
||||||
def path_to plugin_id
|
|
||||||
File.join plugin_path, "#{plugin_id}.rb"
|
|
||||||
end
|
|
||||||
|
|
||||||
# Converts +id+ to a Symbol if it is a String,
|
|
||||||
# or returns +id+ if it already is a Symbol.
|
|
||||||
#
|
|
||||||
# Raises +ArgumentError+ for all other objects, or if the
|
|
||||||
# given String includes non-alphanumeric characters (\W).
|
|
||||||
def validate_id id
|
|
||||||
if id.is_a? Symbol or id.nil?
|
|
||||||
id
|
|
||||||
elsif id.is_a? String
|
|
||||||
if id[/\w+/] == id
|
|
||||||
id.downcase.to_sym
|
|
||||||
else
|
|
||||||
raise ArgumentError, "Invalid id given: #{id}"
|
|
||||||
end
|
|
||||||
else
|
|
||||||
raise ArgumentError, "String or Symbol expected, but #{id.class} given."
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
|
|
||||||
# = Plugin
|
|
||||||
#
|
|
||||||
# Plugins have to include this module.
|
|
||||||
#
|
|
||||||
# IMPORTANT: Use extend for this module.
|
|
||||||
#
|
|
||||||
# See CodeRay::PluginHost for examples.
|
|
||||||
module Plugin
|
|
||||||
|
|
||||||
attr_reader :plugin_id
|
|
||||||
|
|
||||||
# Register this class for the given +id+.
|
|
||||||
#
|
|
||||||
# Example:
|
|
||||||
# class MyPlugin < PluginHost::BaseClass
|
|
||||||
# register_for :my_id
|
|
||||||
# ...
|
|
||||||
# end
|
|
||||||
#
|
|
||||||
# See PluginHost.register.
|
|
||||||
def register_for id
|
|
||||||
@plugin_id = id
|
|
||||||
plugin_host.register self, id
|
|
||||||
end
|
|
||||||
|
|
||||||
# Returns the title of the plugin, or sets it to the
|
|
||||||
# optional argument +title+.
|
|
||||||
def title title = nil
|
|
||||||
if title
|
|
||||||
@title = title.to_s
|
|
||||||
else
|
|
||||||
@title ||= name[/([^:]+)$/, 1]
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# The PluginHost for this Plugin class.
|
|
||||||
def plugin_host host = nil
|
|
||||||
if host.is_a? PluginHost
|
|
||||||
const_set :PLUGIN_HOST, host
|
|
||||||
end
|
|
||||||
self::PLUGIN_HOST
|
|
||||||
end
|
|
||||||
|
|
||||||
def aliases
|
|
||||||
plugin_host.load_plugin_map
|
|
||||||
plugin_host.plugin_hash.inject [] do |aliases, (key, _)|
|
|
||||||
aliases << key if plugin_host[key] == self
|
|
||||||
aliases
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
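CodeRay's own Scanners module (further below in this diff) is such a PluginHost, so lookups behave as described above:

    CodeRay::Scanners[:ruby]      # requires coderay/scanners/ruby.rb and returns the scanner class
    CodeRay::Scanners['Ruby']     # same class; String ids are downcased and converted to Symbols
    CodeRay::Scanners[:nonsense]  # warns and falls back to the default plugin set in the map file (:text)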
|
|
|
@ -1,77 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
|
|
||||||
# = WordList
|
|
||||||
#
|
|
||||||
# <b>A Hash subclass designed for mapping word lists to token types.</b>
|
|
||||||
#
|
|
||||||
# Copyright (c) 2006-2011 by murphy (Kornelius Kalnbach) <murphy rubychan de>
|
|
||||||
#
|
|
||||||
# License:: LGPL / ask the author
|
|
||||||
# Version:: 2.0 (2011-05-08)
|
|
||||||
#
|
|
||||||
# A WordList is a Hash with some additional features.
|
|
||||||
# It is intended to be used for keyword recognition.
|
|
||||||
#
|
|
||||||
# WordList is optimized to be used in Scanners,
|
|
||||||
# typically to decide whether a given ident is a special token.
|
|
||||||
#
|
|
||||||
# For case insensitive words use WordList::CaseIgnoring.
|
|
||||||
#
|
|
||||||
# Example:
|
|
||||||
#
|
|
||||||
# # define word arrays
|
|
||||||
# RESERVED_WORDS = %w[
|
|
||||||
# asm break case continue default do else
|
|
||||||
# ]
|
|
||||||
#
|
|
||||||
# PREDEFINED_TYPES = %w[
|
|
||||||
# int long short char void
|
|
||||||
# ]
|
|
||||||
#
|
|
||||||
# # make a WordList
|
|
||||||
# IDENT_KIND = WordList.new(:ident).
|
|
||||||
# add(RESERVED_WORDS, :reserved).
|
|
||||||
# add(PREDEFINED_TYPES, :predefined_type)
|
|
||||||
#
|
|
||||||
# ...
|
|
||||||
#
|
|
||||||
# def scan_tokens tokens, options
|
|
||||||
# ...
|
|
||||||
#
|
|
||||||
# elsif scan(/[A-Za-z_][A-Za-z_0-9]*/)
|
|
||||||
# # use it
|
|
||||||
# kind = IDENT_KIND[match]
|
|
||||||
# ...
|
|
||||||
class WordList < Hash
|
|
||||||
|
|
||||||
# Create a new WordList with +default+ as default value.
|
|
||||||
def initialize default = false
|
|
||||||
super default
|
|
||||||
end
|
|
||||||
|
|
||||||
# Add words to the list and associate them with +value+.
|
|
||||||
#
|
|
||||||
# Returns +self+, so you can chain add calls.
|
|
||||||
def add words, value = true
|
|
||||||
words.each { |word| self[word] = value }
|
|
||||||
self
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
|
|
||||||
# A CaseIgnoring WordList is like a WordList, only that
|
|
||||||
# keys are compared case-insensitively (normalizing keys using +downcase+).
|
|
||||||
class WordList::CaseIgnoring < WordList
|
|
||||||
|
|
||||||
def [] key
|
|
||||||
super key.downcase
|
|
||||||
end
|
|
||||||
|
|
||||||
def []= key, value
|
|
||||||
super key.downcase, value
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
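A small sketch of the case-insensitive variant; the SQL-ish word list here is made up for illustration:

    KEYWORDS = CodeRay::WordList::CaseIgnoring.new(:ident).
      add(%w[select from where], :keyword)
    KEYWORDS['SELECT']  #=> :keyword  (stored and looked up as 'select')
    KEYWORDS['foo']     #=> :ident    (the default value)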
|
|
|
@ -1,323 +0,0 @@
|
||||||
# encoding: utf-8
|
|
||||||
require 'strscan'
|
|
||||||
|
|
||||||
module CodeRay
|
|
||||||
|
|
||||||
autoload :WordList, 'coderay/helpers/word_list'
|
|
||||||
|
|
||||||
# = Scanners
|
|
||||||
#
|
|
||||||
# This module holds the Scanner class and its subclasses.
|
|
||||||
# For example, the Ruby scanner is named CodeRay::Scanners::Ruby
|
|
||||||
# and can be found in coderay/scanners/ruby.
|
|
||||||
#
|
|
||||||
# Scanner also provides methods and constants for the register
|
|
||||||
# mechanism and the [] method that returns the Scanner class
|
|
||||||
# belonging to the given lang.
|
|
||||||
#
|
|
||||||
# See PluginHost.
|
|
||||||
module Scanners
|
|
||||||
extend PluginHost
|
|
||||||
plugin_path File.dirname(__FILE__), 'scanners'
|
|
||||||
|
|
||||||
|
|
||||||
# = Scanner
|
|
||||||
#
|
|
||||||
# The base class for all Scanners.
|
|
||||||
#
|
|
||||||
# It is a subclass of Ruby's great +StringScanner+, which
|
|
||||||
# makes it easy to access the scanning methods inside.
|
|
||||||
#
|
|
||||||
# It is also +Enumerable+, so you can use it like an Array of
|
|
||||||
# Tokens:
|
|
||||||
#
|
|
||||||
# require 'coderay'
|
|
||||||
#
|
|
||||||
# c_scanner = CodeRay::Scanners[:c].new "if (*p == '{') nest++;"
|
|
||||||
#
|
|
||||||
# for text, kind in c_scanner
|
|
||||||
# puts text if kind == :operator
|
|
||||||
# end
|
|
||||||
#
|
|
||||||
# # prints: (*==)++;
|
|
||||||
#
|
|
||||||
# OK, this is a very simple example :)
|
|
||||||
# You can also use +map+, +any?+, +find+ and even +sort_by+,
|
|
||||||
# if you want.
|
|
||||||
class Scanner < StringScanner
|
|
||||||
|
|
||||||
extend Plugin
|
|
||||||
plugin_host Scanners
|
|
||||||
|
|
||||||
# Raised if a Scanner fails while scanning
|
|
||||||
ScanError = Class.new StandardError
|
|
||||||
|
|
||||||
# The default options for all scanner classes.
|
|
||||||
#
|
|
||||||
# Define @default_options for subclasses.
|
|
||||||
DEFAULT_OPTIONS = { }
|
|
||||||
|
|
||||||
KINDS_NOT_LOC = [:comment, :doctype, :docstring]
|
|
||||||
|
|
||||||
attr_accessor :state
|
|
||||||
|
|
||||||
class << self
|
|
||||||
|
|
||||||
# Normalizes the given code into a string with UNIX newlines, in the
|
|
||||||
# scanner's internal encoding, with invalid and undefined characters
|
|
||||||
# replaced by placeholders. Always returns a new object.
|
|
||||||
def normalize code
|
|
||||||
# original = code
|
|
||||||
code = code.to_s unless code.is_a? ::String
|
|
||||||
return code if code.empty?
|
|
||||||
|
|
||||||
if code.respond_to? :encoding
|
|
||||||
code = encode_with_encoding code, self.encoding
|
|
||||||
else
|
|
||||||
code = to_unix code
|
|
||||||
end
|
|
||||||
# code = code.dup if code.eql? original
|
|
||||||
code
|
|
||||||
end
|
|
||||||
|
|
||||||
# The typical filename suffix for this scanner's language.
|
|
||||||
def file_extension extension = lang
|
|
||||||
@file_extension ||= extension.to_s
|
|
||||||
end
|
|
||||||
|
|
||||||
# The encoding used internally by this scanner.
|
|
||||||
def encoding name = 'UTF-8'
|
|
||||||
@encoding ||= defined?(Encoding.find) && Encoding.find(name)
|
|
||||||
end
|
|
||||||
|
|
||||||
# The lang of this Scanner class, which is equal to its Plugin ID.
|
|
||||||
def lang
|
|
||||||
@plugin_id
|
|
||||||
end
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def encode_with_encoding code, target_encoding
|
|
||||||
if code.encoding == target_encoding
|
|
||||||
if code.valid_encoding?
|
|
||||||
return to_unix(code)
|
|
||||||
else
|
|
||||||
source_encoding = guess_encoding code
|
|
||||||
end
|
|
||||||
else
|
|
||||||
source_encoding = code.encoding
|
|
||||||
end
|
|
||||||
# print "encode_with_encoding from #{source_encoding} to #{target_encoding}"
|
|
||||||
code.encode target_encoding, source_encoding, :universal_newline => true, :undef => :replace, :invalid => :replace
|
|
||||||
end
|
|
||||||
|
|
||||||
def to_unix code
|
|
||||||
code.index(?\r) ? code.gsub(/\r\n?/, "\n") : code
|
|
||||||
end
|
|
||||||
|
|
||||||
def guess_encoding s
|
|
||||||
#:nocov:
|
|
||||||
IO.popen("file -b --mime -", "w+") do |file|
|
|
||||||
file.write s[0, 1024]
|
|
||||||
file.close_write
|
|
||||||
begin
|
|
||||||
Encoding.find file.gets[/charset=([-\w]+)/, 1]
|
|
||||||
rescue ArgumentError
|
|
||||||
Encoding::BINARY
|
|
||||||
end
|
|
||||||
end
|
|
||||||
#:nocov:
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
# Create a new Scanner.
|
|
||||||
#
|
|
||||||
# * +code+ is the input String and is handled by the superclass
|
|
||||||
# StringScanner.
|
|
||||||
# * +options+ is a Hash with Symbols as keys.
|
|
||||||
# It is merged with the default options of the class (you can
|
|
||||||
# overwrite default options here.)
|
|
||||||
#
|
|
||||||
# If the :tokens option is given, that object is used to store the tokens; else, a new Tokens object is used.
|
|
||||||
def initialize code = '', options = {}
|
|
||||||
if self.class == Scanner
|
|
||||||
raise NotImplementedError, "I am only the basic Scanner class. I can't scan anything. :( Use my subclasses."
|
|
||||||
end
|
|
||||||
|
|
||||||
@options = self.class::DEFAULT_OPTIONS.merge options
|
|
||||||
|
|
||||||
super self.class.normalize(code)
|
|
||||||
|
|
||||||
@tokens = options[:tokens] || Tokens.new
|
|
||||||
@tokens.scanner = self if @tokens.respond_to? :scanner=
|
|
||||||
|
|
||||||
setup
|
|
||||||
end
|
|
||||||
|
|
||||||
# Sets back the scanner. Subclasses should redefine the reset_instance
|
|
||||||
# method instead of this one.
|
|
||||||
def reset
|
|
||||||
super
|
|
||||||
reset_instance
|
|
||||||
end
|
|
||||||
|
|
||||||
# Set a new string to be scanned.
|
|
||||||
def string= code
|
|
||||||
code = self.class.normalize(code)
|
|
||||||
super code
|
|
||||||
reset_instance
|
|
||||||
end
|
|
||||||
|
|
||||||
# the Plugin ID for this scanner
|
|
||||||
def lang
|
|
||||||
self.class.lang
|
|
||||||
end
|
|
||||||
|
|
||||||
# the default file extension for this scanner
|
|
||||||
def file_extension
|
|
||||||
self.class.file_extension
|
|
||||||
end
|
|
||||||
|
|
||||||
# Scan the code and returns all tokens in a Tokens object.
|
|
||||||
def tokenize source = nil, options = {}
|
|
||||||
options = @options.merge(options)
|
|
||||||
@tokens = options[:tokens] || @tokens || Tokens.new
|
|
||||||
@tokens.scanner = self if @tokens.respond_to? :scanner=
|
|
||||||
case source
|
|
||||||
when Array
|
|
||||||
self.string = self.class.normalize(source.join)
|
|
||||||
when nil
|
|
||||||
reset
|
|
||||||
else
|
|
||||||
self.string = self.class.normalize(source)
|
|
||||||
end
|
|
||||||
|
|
||||||
begin
|
|
||||||
scan_tokens @tokens, options
|
|
||||||
rescue => e
|
|
||||||
message = "Error in %s#scan_tokens, initial state was: %p" % [self.class, defined?(state) && state]
|
|
||||||
raise_inspect e.message, @tokens, message, 30, e.backtrace
|
|
||||||
end
|
|
||||||
|
|
||||||
@cached_tokens = @tokens
|
|
||||||
if source.is_a? Array
|
|
||||||
@tokens.split_into_parts(*source.map { |part| part.size })
|
|
||||||
else
|
|
||||||
@tokens
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# Cache the result of tokenize.
|
|
||||||
def tokens
|
|
||||||
@cached_tokens ||= tokenize
|
|
||||||
end
|
|
||||||
|
|
||||||
# Traverse the tokens.
|
|
||||||
def each &block
|
|
||||||
tokens.each(&block)
|
|
||||||
end
|
|
||||||
include Enumerable
|
|
||||||
|
|
||||||
# The current line position of the scanner, starting with 1.
|
|
||||||
# See also: #column.
|
|
||||||
#
|
|
||||||
# Beware, this is implemented inefficiently. It should be used
|
|
||||||
# for debugging only.
|
|
||||||
def line pos = self.pos
|
|
||||||
return 1 if pos <= 0
|
|
||||||
binary_string[0...pos].count("\n") + 1
|
|
||||||
end
|
|
||||||
|
|
||||||
# The current column position of the scanner, starting with 1.
|
|
||||||
# See also: #line.
|
|
||||||
def column pos = self.pos
|
|
||||||
return 1 if pos <= 0
|
|
||||||
pos - (binary_string.rindex(?\n, pos - 1) || -1)
|
|
||||||
end
|
|
||||||
|
|
||||||
# The string in binary encoding.
|
|
||||||
#
|
|
||||||
# To be used with #pos, which is the index of the byte the scanner
|
|
||||||
# will scan next.
|
|
||||||
def binary_string
|
|
||||||
@binary_string ||=
|
|
||||||
if string.respond_to?(:bytesize) && string.bytesize != string.size
|
|
||||||
#:nocov:
|
|
||||||
string.dup.force_encoding('binary')
|
|
||||||
#:nocov:
|
|
||||||
else
|
|
||||||
string
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
# Can be implemented by subclasses to do some initialization
|
|
||||||
# that has to be done once per instance.
|
|
||||||
#
|
|
||||||
# Use reset for initialization that has to be done once per
|
|
||||||
# scan.
|
|
||||||
def setup # :doc:
|
|
||||||
end
|
|
||||||
|
|
||||||
# This is the central method, and commonly the only one a
|
|
||||||
# subclass implements.
|
|
||||||
#
|
|
||||||
# Subclasses must implement this method; it must return +tokens+
|
|
||||||
# and must only use Tokens#<< for storing scanned tokens!
|
|
||||||
def scan_tokens tokens, options # :doc:
|
|
||||||
raise NotImplementedError, "#{self.class}#scan_tokens not implemented."
|
|
||||||
end
|
|
||||||
|
|
||||||
# Resets the scanner.
|
|
||||||
def reset_instance
|
|
||||||
@tokens.clear if @tokens.respond_to?(:clear) && !@options[:keep_tokens]
|
|
||||||
@cached_tokens = nil
|
|
||||||
@binary_string = nil if defined? @binary_string
|
|
||||||
end
|
|
||||||
|
|
||||||
# Scanner error with additional status information
|
|
||||||
def raise_inspect msg, tokens, state = self.state || 'No state given!', ambit = 30, backtrace = caller
|
|
||||||
raise ScanError, <<-EOE % [
|
|
||||||
|
|
||||||
|
|
||||||
***ERROR in %s: %s (after %d tokens)
|
|
||||||
|
|
||||||
tokens:
|
|
||||||
%s
|
|
||||||
|
|
||||||
current line: %d column: %d pos: %d
|
|
||||||
matched: %p state: %p
|
|
||||||
bol? = %p, eos? = %p
|
|
||||||
|
|
||||||
surrounding code:
|
|
||||||
%p ~~ %p
|
|
||||||
|
|
||||||
|
|
||||||
***ERROR***
|
|
||||||
|
|
||||||
EOE
|
|
||||||
File.basename(caller[0]),
|
|
||||||
msg,
|
|
||||||
tokens.respond_to?(:size) ? tokens.size : 0,
|
|
||||||
tokens.respond_to?(:last) ? tokens.last(10).map { |t| t.inspect }.join("\n") : '',
|
|
||||||
line, column, pos,
|
|
||||||
matched, state, bol?, eos?,
|
|
||||||
binary_string[pos - ambit, ambit],
|
|
||||||
binary_string[pos, ambit],
|
|
||||||
], backtrace
|
|
||||||
end
|
|
||||||
|
|
||||||
# Shorthand for scan_until(/\z/).
|
|
||||||
# This method also avoids a JRuby 1.9 mode bug.
|
|
||||||
def scan_rest
|
|
||||||
rest = self.rest
|
|
||||||
terminate
|
|
||||||
rest
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
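A minimal sketch of a Scanner subclass; the :plain_words id and the trivial tokenization are made up for illustration (real scanners live in a file under the plugin path so they can be autoloaded):

    require 'coderay'

    class WordScanner < CodeRay::Scanners::Scanner
      register_for :plain_words

      protected

      # Emit every whitespace run as :space and everything else as :ident.
      def scan_tokens tokens, options
        until eos?
          if match = scan(/\s+/)
            tokens.text_token match, :space
          else
            tokens.text_token scan(/\S+/), :ident
          end
        end
        tokens
      end
    end

    WordScanner.new('Hello CodeRay world').tokenize  # three :ident and two :space tokens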
|
|
|
@ -1,24 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
map \
|
|
||||||
:'c++' => :cpp,
|
|
||||||
:cplusplus => :cpp,
|
|
||||||
:ecmascript => :java_script,
|
|
||||||
:ecma_script => :java_script,
|
|
||||||
:rhtml => :erb,
|
|
||||||
:eruby => :erb,
|
|
||||||
:irb => :ruby,
|
|
||||||
:javascript => :java_script,
|
|
||||||
:js => :java_script,
|
|
||||||
:pascal => :delphi,
|
|
||||||
:patch => :diff,
|
|
||||||
:plain => :text,
|
|
||||||
:plaintext => :text,
|
|
||||||
:xhtml => :html,
|
|
||||||
:yml => :yaml
|
|
||||||
|
|
||||||
default :text
|
|
||||||
|
|
||||||
end
|
|
||||||
end
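The aliases above make these lookups interchangeable:

    CodeRay::Scanners[:'c++'] == CodeRay::Scanners[:cpp]          #=> true
    CodeRay::Scanners[:js]    == CodeRay::Scanners[:java_script]  #=> true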
|
|
|
@ -1,189 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
# Scanner for C.
|
|
||||||
class C < Scanner
|
|
||||||
|
|
||||||
register_for :c
|
|
||||||
file_extension 'c'
|
|
||||||
|
|
||||||
KEYWORDS = [
|
|
||||||
'asm', 'break', 'case', 'continue', 'default', 'do',
|
|
||||||
'else', 'enum', 'for', 'goto', 'if', 'return',
|
|
||||||
'sizeof', 'struct', 'switch', 'typedef', 'union', 'while',
|
|
||||||
'restrict', # added in C99
|
|
||||||
] # :nodoc:
|
|
||||||
|
|
||||||
PREDEFINED_TYPES = [
|
|
||||||
'int', 'long', 'short', 'char',
|
|
||||||
'signed', 'unsigned', 'float', 'double',
|
|
||||||
'bool', 'complex', # added in C99
|
|
||||||
] # :nodoc:
|
|
||||||
|
|
||||||
PREDEFINED_CONSTANTS = [
|
|
||||||
'EOF', 'NULL',
|
|
||||||
'true', 'false', # added in C99
|
|
||||||
] # :nodoc:
|
|
||||||
DIRECTIVES = [
|
|
||||||
'auto', 'extern', 'register', 'static', 'void',
|
|
||||||
'const', 'volatile', # added in C89
|
|
||||||
'inline', # added in C99
|
|
||||||
] # :nodoc:
|
|
||||||
|
|
||||||
IDENT_KIND = WordList.new(:ident).
|
|
||||||
add(KEYWORDS, :keyword).
|
|
||||||
add(PREDEFINED_TYPES, :predefined_type).
|
|
||||||
add(DIRECTIVES, :directive).
|
|
||||||
add(PREDEFINED_CONSTANTS, :predefined_constant) # :nodoc:
|
|
||||||
|
|
||||||
ESCAPE = / [rbfntv\n\\'"] | x[a-fA-F0-9]{1,2} | [0-7]{1,3} /x # :nodoc:
|
|
||||||
UNICODE_ESCAPE = / u[a-fA-F0-9]{4} | U[a-fA-F0-9]{8} /x # :nodoc:
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
|
|
||||||
state = :initial
|
|
||||||
label_expected = true
|
|
||||||
case_expected = false
|
|
||||||
label_expected_before_preproc_line = nil
|
|
||||||
in_preproc_line = false
|
|
||||||
|
|
||||||
until eos?
|
|
||||||
|
|
||||||
case state
|
|
||||||
|
|
||||||
when :initial
|
|
||||||
|
|
||||||
if match = scan(/ \s+ | \\\n /x)
|
|
||||||
if in_preproc_line && match != "\\\n" && match.index(?\n)
|
|
||||||
in_preproc_line = false
|
|
||||||
label_expected = label_expected_before_preproc_line
|
|
||||||
end
|
|
||||||
encoder.text_token match, :space
|
|
||||||
|
|
||||||
elsif match = scan(%r! // [^\n\\]* (?: \\. [^\n\\]* )* | /\* (?: .*? \*/ | .* ) !mx)
|
|
||||||
encoder.text_token match, :comment
|
|
||||||
|
|
||||||
elsif match = scan(/ [-+*=<>?:;,!&^|()\[\]{}~%]+ | \/=? | \.(?!\d) /x)
|
|
||||||
label_expected = match =~ /[;\{\}]/
|
|
||||||
if case_expected
|
|
||||||
label_expected = true if match == ':'
|
|
||||||
case_expected = false
|
|
||||||
end
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
|
|
||||||
elsif match = scan(/ [A-Za-z_][A-Za-z_0-9]* /x)
|
|
||||||
kind = IDENT_KIND[match]
|
|
||||||
if kind == :ident && label_expected && !in_preproc_line && scan(/:(?!:)/)
|
|
||||||
kind = :label
|
|
||||||
match << matched
|
|
||||||
else
|
|
||||||
label_expected = false
|
|
||||||
if kind == :keyword
|
|
||||||
case match
|
|
||||||
when 'case', 'default'
|
|
||||||
case_expected = true
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
encoder.text_token match, kind
|
|
||||||
|
|
||||||
elsif match = scan(/L?"/)
|
|
||||||
encoder.begin_group :string
|
|
||||||
if match[0] == ?L
|
|
||||||
encoder.text_token 'L', :modifier
|
|
||||||
match = '"'
|
|
||||||
end
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
state = :string
|
|
||||||
|
|
||||||
elsif match = scan(/ \# \s* if \s* 0 /x)
|
|
||||||
match << scan_until(/ ^\# (?:elif|else|endif) .*? $ | \z /xm) unless eos?
|
|
||||||
encoder.text_token match, :comment
|
|
||||||
|
|
||||||
elsif match = scan(/#[ \t]*(\w*)/)
|
|
||||||
encoder.text_token match, :preprocessor
|
|
||||||
in_preproc_line = true
|
|
||||||
label_expected_before_preproc_line = label_expected
|
|
||||||
state = :include_expected if self[1] == 'include'
|
|
||||||
|
|
||||||
elsif match = scan(/ L?' (?: [^\'\n\\] | \\ #{ESCAPE} )? '? /ox)
|
|
||||||
label_expected = false
|
|
||||||
encoder.text_token match, :char
|
|
||||||
|
|
||||||
elsif match = scan(/\$/)
|
|
||||||
encoder.text_token match, :ident
|
|
||||||
|
|
||||||
elsif match = scan(/0[xX][0-9A-Fa-f]+/)
|
|
||||||
label_expected = false
|
|
||||||
encoder.text_token match, :hex
|
|
||||||
|
|
||||||
elsif match = scan(/(?:0[0-7]+)(?![89.eEfF])/)
|
|
||||||
label_expected = false
|
|
||||||
encoder.text_token match, :octal
|
|
||||||
|
|
||||||
elsif match = scan(/(?:\d+)(?![.eEfF])L?L?/)
|
|
||||||
label_expected = false
|
|
||||||
encoder.text_token match, :integer
|
|
||||||
|
|
||||||
elsif match = scan(/\d[fF]?|\d*\.\d+(?:[eE][+-]?\d+)?[fF]?|\d+[eE][+-]?\d+[fF]?/)
|
|
||||||
label_expected = false
|
|
||||||
encoder.text_token match, :float
|
|
||||||
|
|
||||||
else
|
|
||||||
encoder.text_token getch, :error
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
when :string
|
|
||||||
if match = scan(/[^\\\n"]+/)
|
|
||||||
encoder.text_token match, :content
|
|
||||||
elsif match = scan(/"/)
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
encoder.end_group :string
|
|
||||||
state = :initial
|
|
||||||
label_expected = false
|
|
||||||
elsif match = scan(/ \\ (?: #{ESCAPE} | #{UNICODE_ESCAPE} ) /mox)
|
|
||||||
encoder.text_token match, :char
|
|
||||||
elsif match = scan(/ \\ | $ /x)
|
|
||||||
encoder.end_group :string
|
|
||||||
encoder.text_token match, :error
|
|
||||||
state = :initial
|
|
||||||
label_expected = false
|
|
||||||
else
|
|
||||||
raise_inspect "else case \" reached; %p not handled." % peek(1), encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
when :include_expected
|
|
||||||
if match = scan(/<[^>\n]+>?|"[^"\n\\]*(?:\\.[^"\n\\]*)*"?/)
|
|
||||||
encoder.text_token match, :include
|
|
||||||
state = :initial
|
|
||||||
|
|
||||||
elsif match = scan(/\s+/)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
state = :initial if match.index ?\n
|
|
||||||
|
|
||||||
else
|
|
||||||
state = :initial
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
else
|
|
||||||
raise_inspect 'Unknown state', encoder
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
if state == :string
|
|
||||||
encoder.end_group :string
|
|
||||||
end
|
|
||||||
|
|
||||||
encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,217 +0,0 @@
|
||||||
# encoding: utf-8
|
|
||||||
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
# Clojure scanner by Licenser.
|
|
||||||
class Clojure < Scanner
|
|
||||||
|
|
||||||
register_for :clojure
|
|
||||||
file_extension 'clj'
|
|
||||||
|
|
||||||
SPECIAL_FORMS = %w[
|
|
||||||
def if do let quote var fn loop recur throw try catch monitor-enter monitor-exit .
|
|
||||||
new
|
|
||||||
] # :nodoc:
|
|
||||||
|
|
||||||
CORE_FORMS = %w[
|
|
||||||
+ - -> ->> .. / * <= < = == >= > accessor aclone add-classpath add-watch
|
|
||||||
agent agent-error agent-errors aget alength alias all-ns alter alter-meta!
|
|
||||||
alter-var-root amap ancestors and apply areduce array-map aset aset-boolean
|
|
||||||
aset-byte aset-char aset-double aset-float aset-int aset-long aset-short
|
|
||||||
assert assoc assoc! assoc-in associative? atom await await-for bases bean
|
|
||||||
bigdec bigint binding bit-and bit-and-not bit-clear bit-flip bit-not bit-or
|
|
||||||
bit-set bit-shift-left bit-shift-right bit-test bit-xor boolean boolean-array
|
|
||||||
booleans bound-fn bound-fn* bound? butlast byte byte-array bytes case cast char
|
|
||||||
char-array char-escape-string char-name-string char? chars class class?
|
|
||||||
clear-agent-errors clojure-version coll? comment commute comp comparator
|
|
||||||
compare compare-and-set! compile complement concat cond condp conj conj!
|
|
||||||
cons constantly construct-proxy contains? count counted? create-ns
|
|
||||||
create-struct cycle dec decimal? declare definline defmacro defmethod defmulti
|
|
||||||
defn defn- defonce defprotocol defrecord defstruct deftype delay delay?
|
|
||||||
deliver denominator deref derive descendants disj disj! dissoc dissoc!
|
|
||||||
distinct distinct? doall doc dorun doseq dosync dotimes doto double
|
|
||||||
double-array doubles drop drop-last drop-while empty empty? ensure
|
|
||||||
enumeration-seq error-handler error-mode eval even? every? extend
|
|
||||||
extend-protocol extend-type extenders extends? false? ffirst file-seq
|
|
||||||
filter find find-doc find-ns find-var first float float-array float?
|
|
||||||
floats flush fn fn? fnext for force format future future-call future-cancel
|
|
||||||
future-cancelled? future-done? future? gen-class gen-interface gensym get
|
|
||||||
get-in get-method get-proxy-class get-thread-bindings get-validator hash
|
|
||||||
hash-map hash-set identical? identity if-let if-not ifn? import in-ns
|
|
||||||
inc init-proxy instance? int int-array integer? interleave intern
|
|
||||||
interpose into into-array ints io! isa? iterate iterator-seq juxt key
|
|
||||||
keys keyword keyword? last lazy-cat lazy-seq let letfn line-seq list list*
|
|
||||||
list? load load-file load-reader load-string loaded-libs locking long
|
|
||||||
long-array longs loop macroexpand macroexpand-1 make-array make-hierarchy
|
|
||||||
map map? mapcat max max-key memfn memoize merge merge-with meta methods
|
|
||||||
min min-key mod name namespace neg? newline next nfirst nil? nnext not
|
|
||||||
not-any? not-empty not-every? not= ns ns-aliases ns-imports ns-interns
|
|
||||||
ns-map ns-name ns-publics ns-refers ns-resolve ns-unalias ns-unmap nth
|
|
||||||
nthnext num number? numerator object-array odd? or parents partial
|
|
||||||
partition pcalls peek persistent! pmap pop pop! pop-thread-bindings
|
|
||||||
pos? pr pr-str prefer-method prefers print print-namespace-doc
|
|
||||||
print-str printf println println-str prn prn-str promise proxy
|
|
||||||
proxy-mappings proxy-super push-thread-bindings pvalues quot rand
|
|
||||||
rand-int range ratio? rationalize re-find re-groups re-matcher
|
|
||||||
re-matches re-pattern re-seq read read-line read-string reduce ref
|
|
||||||
ref-history-count ref-max-history ref-min-history ref-set refer
|
|
||||||
refer-clojure reify release-pending-sends rem remove remove-all-methods
|
|
||||||
remove-method remove-ns remove-watch repeat repeatedly replace replicate
|
|
||||||
require reset! reset-meta! resolve rest restart-agent resultset-seq
|
|
||||||
reverse reversible? rseq rsubseq satisfies? second select-keys send
|
|
||||||
send-off seq seq? seque sequence sequential? set set-error-handler!
|
|
||||||
set-error-mode! set-validator! set? short short-array shorts
|
|
||||||
shutdown-agents slurp some sort sort-by sorted-map sorted-map-by
|
|
||||||
sorted-set sorted-set-by sorted? special-form-anchor special-symbol?
|
|
||||||
split-at split-with str string? struct struct-map subs subseq subvec
|
|
||||||
supers swap! symbol symbol? sync syntax-symbol-anchor take take-last
|
|
||||||
take-nth take-while test the-ns thread-bound? time to-array to-array-2d
|
|
||||||
trampoline transient tree-seq true? type unchecked-add unchecked-dec
|
|
||||||
unchecked-divide unchecked-inc unchecked-multiply unchecked-negate
|
|
||||||
unchecked-remainder unchecked-subtract underive update-in update-proxy
|
|
||||||
use val vals var-get var-set var? vary-meta vec vector vector-of vector?
|
|
||||||
when when-first when-let when-not while with-bindings with-bindings*
|
|
||||||
with-in-str with-local-vars with-meta with-open with-out-str
|
|
||||||
with-precision xml-seq zero? zipmap
|
|
||||||
] # :nodoc:
|
|
||||||
|
|
||||||
PREDEFINED_CONSTANTS = %w[
|
|
||||||
true false nil *1 *2 *3 *agent* *clojure-version* *command-line-args*
|
|
||||||
*compile-files* *compile-path* *e *err* *file* *flush-on-newline*
|
|
||||||
*in* *ns* *out* *print-dup* *print-length* *print-level* *print-meta*
|
|
||||||
*print-readably* *read-eval* *warn-on-reflection*
|
|
||||||
] # :nodoc:
|
|
||||||
|
|
||||||
IDENT_KIND = WordList.new(:ident).
|
|
||||||
add(SPECIAL_FORMS, :keyword).
|
|
||||||
add(CORE_FORMS, :keyword).
|
|
||||||
add(PREDEFINED_CONSTANTS, :predefined_constant)
|
|
||||||
|
|
||||||
KEYWORD_NEXT_TOKEN_KIND = WordList.new(nil).
|
|
||||||
add(%w[ def defn defn- definline defmacro defmulti defmethod defstruct defonce declare ], :function).
|
|
||||||
add(%w[ ns ], :namespace).
|
|
||||||
add(%w[ defprotocol defrecord ], :class)
|
|
||||||
|
|
||||||
BASIC_IDENTIFIER = /[a-zA-Z$%*\/_+!?&<>\-=]=?[a-zA-Z0-9$&*+!\/_?<>\-\#]*/
|
|
||||||
IDENTIFIER = /(?!-\d)(?:(?:#{BASIC_IDENTIFIER}\.)*#{BASIC_IDENTIFIER}(?:\/#{BASIC_IDENTIFIER})?\.?)|\.\.?/
|
|
||||||
SYMBOL = /::?#{IDENTIFIER}/o
|
|
||||||
DIGIT = /\d/
|
|
||||||
DIGIT10 = DIGIT
|
|
||||||
DIGIT16 = /[0-9a-f]/i
|
|
||||||
DIGIT8 = /[0-7]/
|
|
||||||
DIGIT2 = /[01]/
|
|
||||||
RADIX16 = /\#x/i
|
|
||||||
RADIX8 = /\#o/i
|
|
||||||
RADIX2 = /\#b/i
|
|
||||||
RADIX10 = /\#d/i
|
|
||||||
EXACTNESS = /#i|#e/i
|
|
||||||
SIGN = /[\+-]?/
|
|
||||||
EXP_MARK = /[esfdl]/i
|
|
||||||
EXP = /#{EXP_MARK}#{SIGN}#{DIGIT}+/
|
|
||||||
SUFFIX = /#{EXP}?/
|
|
||||||
PREFIX10 = /#{RADIX10}?#{EXACTNESS}?|#{EXACTNESS}?#{RADIX10}?/
|
|
||||||
PREFIX16 = /#{RADIX16}#{EXACTNESS}?|#{EXACTNESS}?#{RADIX16}/
|
|
||||||
PREFIX8 = /#{RADIX8}#{EXACTNESS}?|#{EXACTNESS}?#{RADIX8}/
|
|
||||||
PREFIX2 = /#{RADIX2}#{EXACTNESS}?|#{EXACTNESS}?#{RADIX2}/
|
|
||||||
UINT10 = /#{DIGIT10}+#*/
|
|
||||||
UINT16 = /#{DIGIT16}+#*/
|
|
||||||
UINT8 = /#{DIGIT8}+#*/
|
|
||||||
UINT2 = /#{DIGIT2}+#*/
|
|
||||||
DECIMAL = /#{DIGIT10}+#+\.#*#{SUFFIX}|#{DIGIT10}+\.#{DIGIT10}*#*#{SUFFIX}|\.#{DIGIT10}+#*#{SUFFIX}|#{UINT10}#{EXP}/
|
|
||||||
UREAL10 = /#{UINT10}\/#{UINT10}|#{DECIMAL}|#{UINT10}/
|
|
||||||
UREAL16 = /#{UINT16}\/#{UINT16}|#{UINT16}/
|
|
||||||
UREAL8 = /#{UINT8}\/#{UINT8}|#{UINT8}/
|
|
||||||
UREAL2 = /#{UINT2}\/#{UINT2}|#{UINT2}/
|
|
||||||
REAL10 = /#{SIGN}#{UREAL10}/
|
|
||||||
REAL16 = /#{SIGN}#{UREAL16}/
|
|
||||||
REAL8 = /#{SIGN}#{UREAL8}/
|
|
||||||
REAL2 = /#{SIGN}#{UREAL2}/
|
|
||||||
IMAG10 = /i|#{UREAL10}i/
|
|
||||||
IMAG16 = /i|#{UREAL16}i/
|
|
||||||
IMAG8 = /i|#{UREAL8}i/
|
|
||||||
IMAG2 = /i|#{UREAL2}i/
|
|
||||||
COMPLEX10 = /#{REAL10}@#{REAL10}|#{REAL10}\+#{IMAG10}|#{REAL10}-#{IMAG10}|\+#{IMAG10}|-#{IMAG10}|#{REAL10}/
|
|
||||||
COMPLEX16 = /#{REAL16}@#{REAL16}|#{REAL16}\+#{IMAG16}|#{REAL16}-#{IMAG16}|\+#{IMAG16}|-#{IMAG16}|#{REAL16}/
|
|
||||||
COMPLEX8 = /#{REAL8}@#{REAL8}|#{REAL8}\+#{IMAG8}|#{REAL8}-#{IMAG8}|\+#{IMAG8}|-#{IMAG8}|#{REAL8}/
|
|
||||||
COMPLEX2 = /#{REAL2}@#{REAL2}|#{REAL2}\+#{IMAG2}|#{REAL2}-#{IMAG2}|\+#{IMAG2}|-#{IMAG2}|#{REAL2}/
|
|
||||||
NUM10 = /#{PREFIX10}?#{COMPLEX10}/
|
|
||||||
NUM16 = /#{PREFIX16}#{COMPLEX16}/
|
|
||||||
NUM8 = /#{PREFIX8}#{COMPLEX8}/
|
|
||||||
NUM2 = /#{PREFIX2}#{COMPLEX2}/
|
|
||||||
NUM = /#{NUM10}|#{NUM16}|#{NUM8}|#{NUM2}/
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
|
|
||||||
state = :initial
|
|
||||||
kind = nil
|
|
||||||
|
|
||||||
until eos?
|
|
||||||
|
|
||||||
case state
|
|
||||||
when :initial
|
|
||||||
if match = scan(/ \s+ | \\\n | , /x)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
elsif match = scan(/['`\(\[\)\]\{\}]|\#[({]|~@?|[@\^]/)
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
elsif match = scan(/;.*/)
|
|
||||||
encoder.text_token match, :comment # TODO: recognize (comment ...) too
|
|
||||||
elsif match = scan(/\#?\\(?:newline|space|.?)/)
|
|
||||||
encoder.text_token match, :char
|
|
||||||
elsif match = scan(/\#[ft]/)
|
|
||||||
encoder.text_token match, :predefined_constant
|
|
||||||
elsif match = scan(/#{IDENTIFIER}/o)
|
|
||||||
kind = IDENT_KIND[match]
|
|
||||||
encoder.text_token match, kind
|
|
||||||
if rest? && kind == :keyword
|
|
||||||
if kind = KEYWORD_NEXT_TOKEN_KIND[match]
|
|
||||||
encoder.text_token match, :space if match = scan(/\s+/o)
|
|
||||||
encoder.text_token match, kind if match = scan(/#{IDENTIFIER}/o)
|
|
||||||
end
|
|
||||||
end
|
|
||||||
elsif match = scan(/#{SYMBOL}/o)
|
|
||||||
encoder.text_token match, :symbol
|
|
||||||
elsif match = scan(/\./)
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
elsif match = scan(/ \# \^ #{IDENTIFIER} /ox)
|
|
||||||
encoder.text_token match, :type
|
|
||||||
elsif match = scan(/ (\#)? " /x)
|
|
||||||
state = self[1] ? :regexp : :string
|
|
||||||
encoder.begin_group state
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
elsif match = scan(/#{NUM}/o) and not matched.empty?
|
|
||||||
encoder.text_token match, match[/[.e\/]/i] ? :float : :integer
|
|
||||||
else
|
|
||||||
encoder.text_token getch, :error
|
|
||||||
end
|
|
||||||
|
|
||||||
when :string, :regexp
|
|
||||||
if match = scan(/[^"\\]+|\\.?/)
|
|
||||||
encoder.text_token match, :content
|
|
||||||
elsif match = scan(/"/)
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
encoder.end_group state
|
|
||||||
state = :initial
|
|
||||||
else
|
|
||||||
raise_inspect "else case \" reached; %p not handled." % peek(1),
|
|
||||||
encoder, state
|
|
||||||
end
|
|
||||||
|
|
||||||
else
|
|
||||||
raise 'else case reached'
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
if [:string, :regexp].include? state
|
|
||||||
encoder.end_group state
|
|
||||||
end
|
|
||||||
|
|
||||||
encoder
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,215 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
# Scanner for C++.
|
|
||||||
#
|
|
||||||
# Aliases: +cplusplus+, c++
|
|
||||||
class CPlusPlus < Scanner
|
|
||||||
|
|
||||||
register_for :cpp
|
|
||||||
file_extension 'cpp'
|
|
||||||
title 'C++'
|
|
||||||
|
|
||||||
#-- http://www.cppreference.com/wiki/keywords/start
|
|
||||||
KEYWORDS = [
|
|
||||||
'and', 'and_eq', 'asm', 'bitand', 'bitor', 'break',
|
|
||||||
'case', 'catch', 'class', 'compl', 'const_cast',
|
|
||||||
'continue', 'default', 'delete', 'do', 'dynamic_cast', 'else',
|
|
||||||
'enum', 'export', 'for', 'goto', 'if', 'namespace', 'new',
|
|
||||||
'not', 'not_eq', 'or', 'or_eq', 'reinterpret_cast', 'return',
|
|
||||||
'sizeof', 'static_cast', 'struct', 'switch', 'template',
|
|
||||||
'throw', 'try', 'typedef', 'typeid', 'typename', 'union',
|
|
||||||
'while', 'xor', 'xor_eq',
|
|
||||||
] # :nodoc:
|
|
||||||
|
|
||||||
PREDEFINED_TYPES = [
|
|
||||||
'bool', 'char', 'double', 'float', 'int', 'long',
|
|
||||||
'short', 'signed', 'unsigned', 'wchar_t', 'string',
|
|
||||||
] # :nodoc:
|
|
||||||
PREDEFINED_CONSTANTS = [
|
|
||||||
'false', 'true',
|
|
||||||
'EOF', 'NULL',
|
|
||||||
] # :nodoc:
|
|
||||||
PREDEFINED_VARIABLES = [
|
|
||||||
'this',
|
|
||||||
] # :nodoc:
|
|
||||||
DIRECTIVES = [
|
|
||||||
'auto', 'const', 'explicit', 'extern', 'friend', 'inline', 'mutable', 'operator',
|
|
||||||
'private', 'protected', 'public', 'register', 'static', 'using', 'virtual', 'void',
|
|
||||||
'volatile',
|
|
||||||
] # :nodoc:
|
|
||||||
|
|
||||||
IDENT_KIND = WordList.new(:ident).
|
|
||||||
add(KEYWORDS, :keyword).
|
|
||||||
add(PREDEFINED_TYPES, :predefined_type).
|
|
||||||
add(PREDEFINED_VARIABLES, :local_variable).
|
|
||||||
add(DIRECTIVES, :directive).
|
|
||||||
add(PREDEFINED_CONSTANTS, :predefined_constant) # :nodoc:
|
|
||||||
|
|
||||||
ESCAPE = / [rbfntv\n\\'"] | x[a-fA-F0-9]{1,2} | [0-7]{1,3} /x # :nodoc:
|
|
||||||
UNICODE_ESCAPE = / u[a-fA-F0-9]{4} | U[a-fA-F0-9]{8} /x # :nodoc:
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
|
|
||||||
state = :initial
|
|
||||||
label_expected = true
|
|
||||||
case_expected = false
|
|
||||||
label_expected_before_preproc_line = nil
|
|
||||||
in_preproc_line = false
|
|
||||||
|
|
||||||
until eos?
|
|
||||||
|
|
||||||
case state
|
|
||||||
|
|
||||||
when :initial
|
|
||||||
|
|
||||||
if match = scan(/ \s+ | \\\n /x)
|
|
||||||
if in_preproc_line && match != "\\\n" && match.index(?\n)
|
|
||||||
in_preproc_line = false
|
|
||||||
label_expected = label_expected_before_preproc_line
|
|
||||||
end
|
|
||||||
encoder.text_token match, :space
|
|
||||||
|
|
||||||
elsif match = scan(%r! // [^\n\\]* (?: \\. [^\n\\]* )* | /\* (?: .*? \*/ | .* ) !mx)
|
|
||||||
encoder.text_token match, :comment
|
|
||||||
|
|
||||||
elsif match = scan(/ \# \s* if \s* 0 /x)
|
|
||||||
match << scan_until(/ ^\# (?:elif|else|endif) .*? $ | \z /xm) unless eos?
|
|
||||||
encoder.text_token match, :comment
|
|
||||||
|
|
||||||
elsif match = scan(/ [-+*=<>?:;,!&^|()\[\]{}~%]+ | \/=? | \.(?!\d) /x)
|
|
||||||
label_expected = match =~ /[;\{\}]/
|
|
||||||
if case_expected
|
|
||||||
label_expected = true if match == ':'
|
|
||||||
case_expected = false
|
|
||||||
end
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
|
|
||||||
elsif match = scan(/ [A-Za-z_][A-Za-z_0-9]* /x)
|
|
||||||
kind = IDENT_KIND[match]
|
|
||||||
if kind == :ident && label_expected && !in_preproc_line && scan(/:(?!:)/)
|
|
||||||
kind = :label
|
|
||||||
match << matched
|
|
||||||
else
|
|
||||||
label_expected = false
|
|
||||||
if kind == :keyword
|
|
||||||
case match
|
|
||||||
when 'class'
|
|
||||||
state = :class_name_expected
|
|
||||||
when 'case', 'default'
|
|
||||||
case_expected = true
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
encoder.text_token match, kind
|
|
||||||
|
|
||||||
elsif match = scan(/\$/)
|
|
||||||
encoder.text_token match, :ident
|
|
||||||
|
|
||||||
elsif match = scan(/L?"/)
|
|
||||||
encoder.begin_group :string
|
|
||||||
if match[0] == ?L
|
|
||||||
encoder.text_token 'L', :modifier
|
|
||||||
match = '"'
|
|
||||||
end
|
|
||||||
state = :string
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
|
|
||||||
elsif match = scan(/#[ \t]*(\w*)/)
|
|
||||||
encoder.text_token match, :preprocessor
|
|
||||||
in_preproc_line = true
|
|
||||||
label_expected_before_preproc_line = label_expected
|
|
||||||
state = :include_expected if self[1] == 'include'
|
|
||||||
|
|
||||||
elsif match = scan(/ L?' (?: [^\'\n\\] | \\ #{ESCAPE} )? '? /ox)
|
|
||||||
label_expected = false
|
|
||||||
encoder.text_token match, :char
|
|
||||||
|
|
||||||
elsif match = scan(/0[xX][0-9A-Fa-f]+/)
|
|
||||||
label_expected = false
|
|
||||||
encoder.text_token match, :hex
|
|
||||||
|
|
||||||
elsif match = scan(/(?:0[0-7]+)(?![89.eEfF])/)
|
|
||||||
label_expected = false
|
|
||||||
encoder.text_token match, :octal
|
|
||||||
|
|
||||||
elsif match = scan(/(?:\d+)(?![.eEfF])L?L?/)
|
|
||||||
label_expected = false
|
|
||||||
encoder.text_token match, :integer
|
|
||||||
|
|
||||||
elsif match = scan(/\d[fF]?|\d*\.\d+(?:[eE][+-]?\d+)?[fF]?|\d+[eE][+-]?\d+[fF]?/)
|
|
||||||
label_expected = false
|
|
||||||
encoder.text_token match, :float
|
|
||||||
|
|
||||||
else
|
|
||||||
encoder.text_token getch, :error
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
when :string
|
|
||||||
if match = scan(/[^\\"]+/)
|
|
||||||
encoder.text_token match, :content
|
|
||||||
elsif match = scan(/"/)
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
encoder.end_group :string
|
|
||||||
state = :initial
|
|
||||||
label_expected = false
|
|
||||||
elsif match = scan(/ \\ (?: #{ESCAPE} | #{UNICODE_ESCAPE} ) /mox)
|
|
||||||
encoder.text_token match, :char
|
|
||||||
elsif match = scan(/ \\ | $ /x)
|
|
||||||
encoder.end_group :string
|
|
||||||
encoder.text_token match, :error
|
|
||||||
state = :initial
|
|
||||||
label_expected = false
|
|
||||||
else
|
|
||||||
raise_inspect "else case \" reached; %p not handled." % peek(1), encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
when :include_expected
|
|
||||||
if match = scan(/<[^>\n]+>?|"[^"\n\\]*(?:\\.[^"\n\\]*)*"?/)
|
|
||||||
encoder.text_token match, :include
|
|
||||||
state = :initial
|
|
||||||
|
|
||||||
elsif match = scan(/\s+/)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
state = :initial if match.index ?\n
|
|
||||||
|
|
||||||
else
|
|
||||||
state = :initial
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
when :class_name_expected
|
|
||||||
if match = scan(/ [A-Za-z_][A-Za-z_0-9]* /x)
|
|
||||||
encoder.text_token match, :class
|
|
||||||
state = :initial
|
|
||||||
|
|
||||||
elsif match = scan(/\s+/)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
|
|
||||||
else
|
|
||||||
encoder.text_token getch, :error
|
|
||||||
state = :initial
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
else
|
|
||||||
raise_inspect 'Unknown state', encoder
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
if state == :string
|
|
||||||
encoder.end_group :string
|
|
||||||
end
|
|
||||||
|
|
||||||
encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
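The scan_tokens loop above is not called directly; CodeRay's front end looks a scanner up by the key it registered and feeds it an encoder. A minimal usage sketch (assuming the file above is the scanner registered as :c, and using the CodeRay 1.0 convenience API):

  require 'coderay'
  # Produces the :comment, :preprocessor, :label etc. tokens emitted by the
  # loop above and renders them as an HTML div.
  puts CodeRay.scan("if (err) goto fail; /* cleanup */", :c).div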
|
|
|
@ -1,192 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
class CSS < Scanner
|
|
||||||
|
|
||||||
register_for :css
|
|
||||||
|
|
||||||
KINDS_NOT_LOC = [
|
|
||||||
:comment,
|
|
||||||
:class, :pseudo_class, :type,
|
|
||||||
:constant, :directive,
|
|
||||||
:key, :value, :operator, :color, :float, :string,
|
|
||||||
:error, :important,
|
|
||||||
] # :nodoc:
|
|
||||||
|
|
||||||
module RE # :nodoc:
|
|
||||||
Hex = /[0-9a-fA-F]/
|
|
||||||
Unicode = /\\#{Hex}{1,6}(?:\r\n|\s)?/ # differs from standard because it allows uppercase hex too
|
|
||||||
Escape = /#{Unicode}|\\[^\r\n\f0-9a-fA-F]/
|
|
||||||
NMChar = /[-_a-zA-Z0-9]|#{Escape}/
|
|
||||||
NMStart = /[_a-zA-Z]|#{Escape}/
|
|
||||||
NL = /\r\n|\r|\n|\f/
|
|
||||||
String1 = /"(?:[^\n\r\f\\"]|\\#{NL}|#{Escape})*"?/ # TODO: buggy regexp
|
|
||||||
String2 = /'(?:[^\n\r\f\\']|\\#{NL}|#{Escape})*'?/ # TODO: buggy regexp
|
|
||||||
String = /#{String1}|#{String2}/
|
|
||||||
|
|
||||||
HexColor = /#(?:#{Hex}{6}|#{Hex}{3})/
|
|
||||||
Color = /#{HexColor}/
|
|
||||||
|
|
||||||
Num = /-?(?:[0-9]+|[0-9]*\.[0-9]+)/
|
|
||||||
Name = /#{NMChar}+/
|
|
||||||
Ident = /-?#{NMStart}#{NMChar}*/
|
|
||||||
AtKeyword = /@#{Ident}/
|
|
||||||
Percentage = /#{Num}%/
|
|
||||||
|
|
||||||
reldimensions = %w[em ex px]
|
|
||||||
absdimensions = %w[in cm mm pt pc]
|
|
||||||
Unit = Regexp.union(*(reldimensions + absdimensions))
|
|
||||||
|
|
||||||
Dimension = /#{Num}#{Unit}/
|
|
||||||
|
|
||||||
Comment = %r! /\* (?: .*? \*/ | .* ) !mx
|
|
||||||
Function = /(?:url|alpha|attr|counters?)\((?:[^)\n\r\f]|\\\))*\)?/
|
|
||||||
|
|
||||||
Id = /##{Name}/
|
|
||||||
Class = /\.#{Name}/
|
|
||||||
PseudoClass = /:#{Name}/
|
|
||||||
AttributeSelector = /\[[^\]]*\]?/
|
|
||||||
end
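The RE constants above are ordinary Regexp objects, so they can be exercised on their own. Illustrative checks (the sample strings are mine; constant paths follow the nesting shown above):

  CodeRay::Scanners::CSS::RE::HexColor  =~ '#ffcc00'  # => 0 (matches)
  CodeRay::Scanners::CSS::RE::Dimension =~ '1.5em'    # => 0 (Num followed by Unit)
  CodeRay::Scanners::CSS::RE::AtKeyword =~ '@media'   # => 0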
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
|
|
||||||
value_expected = nil
|
|
||||||
states = [:initial]
|
|
||||||
|
|
||||||
until eos?
|
|
||||||
|
|
||||||
if match = scan(/\s+/)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
|
|
||||||
elsif case states.last
|
|
||||||
when :initial, :media
|
|
||||||
if match = scan(/(?>#{RE::Ident})(?!\()|\*/ox)
|
|
||||||
encoder.text_token match, :type
|
|
||||||
next
|
|
||||||
elsif match = scan(RE::Class)
|
|
||||||
encoder.text_token match, :class
|
|
||||||
next
|
|
||||||
elsif match = scan(RE::Id)
|
|
||||||
encoder.text_token match, :constant
|
|
||||||
next
|
|
||||||
elsif match = scan(RE::PseudoClass)
|
|
||||||
encoder.text_token match, :pseudo_class
|
|
||||||
next
|
|
||||||
elsif match = scan(RE::AttributeSelector)
|
|
||||||
# TODO: Improve highlighting inside of attribute selectors.
|
|
||||||
encoder.text_token match[0,1], :operator
|
|
||||||
encoder.text_token match[1..-2], :attribute_name if match.size > 2
|
|
||||||
encoder.text_token match[-1,1], :operator if match[-1] == ?]
|
|
||||||
next
|
|
||||||
elsif match = scan(/@media/)
|
|
||||||
encoder.text_token match, :directive
|
|
||||||
states.push :media_before_name
|
|
||||||
next
|
|
||||||
end
|
|
||||||
|
|
||||||
when :block
|
|
||||||
if match = scan(/(?>#{RE::Ident})(?!\()/ox)
|
|
||||||
if value_expected
|
|
||||||
encoder.text_token match, :value
|
|
||||||
else
|
|
||||||
encoder.text_token match, :key
|
|
||||||
end
|
|
||||||
next
|
|
||||||
end
|
|
||||||
|
|
||||||
when :media_before_name
|
|
||||||
if match = scan(RE::Ident)
|
|
||||||
encoder.text_token match, :type
|
|
||||||
states[-1] = :media_after_name
|
|
||||||
next
|
|
||||||
end
|
|
||||||
|
|
||||||
when :media_after_name
|
|
||||||
if match = scan(/\{/)
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
states[-1] = :media
|
|
||||||
next
|
|
||||||
end
|
|
||||||
|
|
||||||
else
|
|
||||||
#:nocov:
|
|
||||||
raise_inspect 'Unknown state', encoder
|
|
||||||
#:nocov:
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif match = scan(/\/\*(?:.*?\*\/|\z)/m)
|
|
||||||
encoder.text_token match, :comment
|
|
||||||
|
|
||||||
elsif match = scan(/\{/)
|
|
||||||
value_expected = false
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
states.push :block
|
|
||||||
|
|
||||||
elsif match = scan(/\}/)
|
|
||||||
value_expected = false
|
|
||||||
if states.last == :block || states.last == :media
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
states.pop
|
|
||||||
else
|
|
||||||
encoder.text_token match, :error
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif match = scan(/#{RE::String}/o)
|
|
||||||
encoder.begin_group :string
|
|
||||||
encoder.text_token match[0, 1], :delimiter
|
|
||||||
encoder.text_token match[1..-2], :content if match.size > 2
|
|
||||||
encoder.text_token match[-1, 1], :delimiter if match.size >= 2
|
|
||||||
encoder.end_group :string
|
|
||||||
|
|
||||||
elsif match = scan(/#{RE::Function}/o)
|
|
||||||
encoder.begin_group :string
|
|
||||||
start = match[/^\w+\(/]
|
|
||||||
encoder.text_token start, :delimiter
|
|
||||||
if match[-1] == ?)
|
|
||||||
encoder.text_token match[start.size..-2], :content
|
|
||||||
encoder.text_token ')', :delimiter
|
|
||||||
else
|
|
||||||
encoder.text_token match[start.size..-1], :content
|
|
||||||
end
|
|
||||||
encoder.end_group :string
|
|
||||||
|
|
||||||
elsif match = scan(/(?: #{RE::Dimension} | #{RE::Percentage} | #{RE::Num} )/ox)
|
|
||||||
encoder.text_token match, :float
|
|
||||||
|
|
||||||
elsif match = scan(/#{RE::Color}/o)
|
|
||||||
encoder.text_token match, :color
|
|
||||||
|
|
||||||
elsif match = scan(/! *important/)
|
|
||||||
encoder.text_token match, :important
|
|
||||||
|
|
||||||
elsif match = scan(/(?:rgb|hsl)a?\([^()\n]*\)?/)
|
|
||||||
encoder.text_token match, :color
|
|
||||||
|
|
||||||
elsif match = scan(RE::AtKeyword)
|
|
||||||
encoder.text_token match, :directive
|
|
||||||
|
|
||||||
elsif match = scan(/ [+>:;,.=()\/] /x)
|
|
||||||
if match == ':'
|
|
||||||
value_expected = true
|
|
||||||
elsif match == ';'
|
|
||||||
value_expected = false
|
|
||||||
end
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
|
|
||||||
else
|
|
||||||
encoder.text_token getch, :error
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,65 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
# = Debug Scanner
|
|
||||||
#
|
|
||||||
# Interprets the output of the Encoders::Debug encoder.
|
|
||||||
class Debug < Scanner
|
|
||||||
|
|
||||||
register_for :debug
|
|
||||||
title 'CodeRay Token Dump Import'
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
|
|
||||||
opened_tokens = []
|
|
||||||
|
|
||||||
until eos?
|
|
||||||
|
|
||||||
if match = scan(/\s+/)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
|
|
||||||
elsif match = scan(/ (\w+) \( ( [^\)\\]* ( \\. [^\)\\]* )* ) \)? /x)
|
|
||||||
kind = self[1].to_sym
|
|
||||||
match = self[2].gsub(/\\(.)/m, '\1')
|
|
||||||
unless TokenKinds.has_key? kind
|
|
||||||
kind = :error
|
|
||||||
match = matched
|
|
||||||
end
|
|
||||||
encoder.text_token match, kind
|
|
||||||
|
|
||||||
elsif match = scan(/ (\w+) ([<\[]) /x)
|
|
||||||
kind = self[1].to_sym
|
|
||||||
opened_tokens << kind
|
|
||||||
case self[2]
|
|
||||||
when '<'
|
|
||||||
encoder.begin_group kind
|
|
||||||
when '['
|
|
||||||
encoder.begin_line kind
|
|
||||||
else
|
|
||||||
raise 'CodeRay bug: This case should not be reached.'
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif !opened_tokens.empty? && match = scan(/ > /x)
|
|
||||||
encoder.end_group opened_tokens.pop
|
|
||||||
|
|
||||||
elsif !opened_tokens.empty? && match = scan(/ \] /x)
|
|
||||||
encoder.end_line opened_tokens.pop
|
|
||||||
|
|
||||||
else
|
|
||||||
encoder.text_token getch, :space
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
encoder.end_group opened_tokens.pop until opened_tokens.empty?
|
|
||||||
|
|
||||||
encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
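For reference, the input this scanner expects is the flat dump produced by Encoders::Debug: plain tokens as kind(content) with ")" and "\" backslash-escaped, token groups as kind<...>, and line tokens as kind[...], which is exactly what the three scan branches above pick apart. An illustrative round trip (the dump string is mine):

  dump = 'keyword(def) space( ) method(foo) string<delimiter(") content(bar) delimiter(")>'
  CodeRay.scan dump, :debug   # rebuilds the token stream from the dump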
|
|
|
@ -1,144 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
# Scanner for the Delphi language (Object Pascal).
|
|
||||||
#
|
|
||||||
# Alias: +pascal+
|
|
||||||
class Delphi < Scanner
|
|
||||||
|
|
||||||
register_for :delphi
|
|
||||||
file_extension 'pas'
|
|
||||||
|
|
||||||
KEYWORDS = [
|
|
||||||
'and', 'array', 'as', 'asm', 'at', 'begin', 'case', 'class',
|
|
||||||
'const', 'constructor', 'destructor', 'dispinterface', 'div', 'do',
|
|
||||||
'downto', 'else', 'end', 'except', 'exports', 'file', 'finalization',
|
|
||||||
'finally', 'for', 'function', 'goto', 'if', 'implementation', 'in',
|
|
||||||
'inherited', 'initialization', 'inline', 'interface', 'is', 'label',
|
|
||||||
'library', 'mod', 'nil', 'not', 'object', 'of', 'or', 'out', 'packed',
|
|
||||||
'procedure', 'program', 'property', 'raise', 'record', 'repeat',
|
|
||||||
'resourcestring', 'set', 'shl', 'shr', 'string', 'then', 'threadvar',
|
|
||||||
'to', 'try', 'type', 'unit', 'until', 'uses', 'var', 'while', 'with',
|
|
||||||
'xor', 'on',
|
|
||||||
] # :nodoc:
|
|
||||||
|
|
||||||
DIRECTIVES = [
|
|
||||||
'absolute', 'abstract', 'assembler', 'at', 'automated', 'cdecl',
|
|
||||||
'contains', 'deprecated', 'dispid', 'dynamic', 'export',
|
|
||||||
'external', 'far', 'forward', 'implements', 'local',
|
|
||||||
'near', 'nodefault', 'on', 'overload', 'override',
|
|
||||||
'package', 'pascal', 'platform', 'private', 'protected', 'public',
|
|
||||||
'published', 'read', 'readonly', 'register', 'reintroduce',
|
|
||||||
'requires', 'resident', 'safecall', 'stdcall', 'stored', 'varargs',
|
|
||||||
'virtual', 'write', 'writeonly',
|
|
||||||
] # :nodoc:
|
|
||||||
|
|
||||||
IDENT_KIND = WordList::CaseIgnoring.new(:ident).
|
|
||||||
add(KEYWORDS, :keyword).
|
|
||||||
add(DIRECTIVES, :directive) # :nodoc:
|
|
||||||
|
|
||||||
NAME_FOLLOWS = WordList::CaseIgnoring.new(false).
|
|
||||||
add(%w(procedure function .)) # :nodoc:
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
|
|
||||||
state = :initial
|
|
||||||
last_token = ''
|
|
||||||
|
|
||||||
until eos?
|
|
||||||
|
|
||||||
if state == :initial
|
|
||||||
|
|
||||||
if match = scan(/ \s+ /x)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
next
|
|
||||||
|
|
||||||
elsif match = scan(%r! \{ \$ [^}]* \}? | \(\* \$ (?: .*? \*\) | .* ) !mx)
|
|
||||||
encoder.text_token match, :preprocessor
|
|
||||||
next
|
|
||||||
|
|
||||||
elsif match = scan(%r! // [^\n]* | \{ [^}]* \}? | \(\* (?: .*? \*\) | .* ) !mx)
|
|
||||||
encoder.text_token match, :comment
|
|
||||||
next
|
|
||||||
|
|
||||||
elsif match = scan(/ <[>=]? | >=? | :=? | [-+=*\/;,@\^|\(\)\[\]] | \.\. /x)
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
|
|
||||||
elsif match = scan(/\./)
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
next if last_token == 'end'
|
|
||||||
|
|
||||||
elsif match = scan(/ [A-Za-z_][A-Za-z_0-9]* /x)
|
|
||||||
encoder.text_token match, NAME_FOLLOWS[last_token] ? :ident : IDENT_KIND[match]
|
|
||||||
|
|
||||||
elsif match = skip(/ ' ( [^\n']|'' ) (?:'|$) /x)
|
|
||||||
encoder.begin_group :char
|
|
||||||
encoder.text_token "'", :delimiter
|
|
||||||
encoder.text_token self[1], :content
|
|
||||||
encoder.text_token "'", :delimiter
|
|
||||||
encoder.end_group :char
|
|
||||||
next
|
|
||||||
|
|
||||||
elsif match = scan(/ ' /x)
|
|
||||||
encoder.begin_group :string
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
state = :string
|
|
||||||
|
|
||||||
elsif match = scan(/ \# (?: \d+ | \$[0-9A-Fa-f]+ ) /x)
|
|
||||||
encoder.text_token match, :char
|
|
||||||
|
|
||||||
elsif match = scan(/ \$ [0-9A-Fa-f]+ /x)
|
|
||||||
encoder.text_token match, :hex
|
|
||||||
|
|
||||||
elsif match = scan(/ (?: \d+ ) (?![eE]|\.[^.]) /x)
|
|
||||||
encoder.text_token match, :integer
|
|
||||||
|
|
||||||
elsif match = scan(/ \d+ (?: \.\d+ (?: [eE][+-]? \d+ )? | [eE][+-]? \d+ ) /x)
|
|
||||||
encoder.text_token match, :float
|
|
||||||
|
|
||||||
else
|
|
||||||
encoder.text_token getch, :error
|
|
||||||
next
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif state == :string
|
|
||||||
if match = scan(/[^\n']+/)
|
|
||||||
encoder.text_token match, :content
|
|
||||||
elsif match = scan(/''/)
|
|
||||||
encoder.text_token match, :char
|
|
||||||
elsif match = scan(/'/)
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
encoder.end_group :string
|
|
||||||
state = :initial
|
|
||||||
next
|
|
||||||
elsif match = scan(/\n/)
|
|
||||||
encoder.end_group :string
|
|
||||||
encoder.text_token match, :space
|
|
||||||
state = :initial
|
|
||||||
else
|
|
||||||
raise "else case \' reached; %p not handled." % peek(1), encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
else
|
|
||||||
raise_inspect 'else-case reached', encoder
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
last_token = match
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
if state == :string
|
|
||||||
encoder.end_group state
|
|
||||||
end
|
|
||||||
|
|
||||||
encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,201 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
# Scanner for output of the diff command.
|
|
||||||
#
|
|
||||||
# Alias: +patch+
|
|
||||||
class Diff < Scanner
|
|
||||||
|
|
||||||
register_for :diff
|
|
||||||
title 'diff output'
|
|
||||||
|
|
||||||
DEFAULT_OPTIONS = {
|
|
||||||
:highlight_code => true,
|
|
||||||
:inline_diff => true,
|
|
||||||
}
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
require 'coderay/helpers/file_type'
|
|
||||||
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
|
|
||||||
line_kind = nil
|
|
||||||
state = :initial
|
|
||||||
deleted_lines = 0
|
|
||||||
scanners = Hash.new do |h, lang|
|
|
||||||
h[lang] = Scanners[lang].new '', :keep_tokens => true, :keep_state => true
|
|
||||||
end
|
|
||||||
content_scanner = scanners[:plain]
|
|
||||||
content_scanner_entry_state = nil
|
|
||||||
|
|
||||||
until eos?
|
|
||||||
|
|
||||||
if match = scan(/\n/)
|
|
||||||
deleted_lines = 0 unless line_kind == :delete
|
|
||||||
if line_kind
|
|
||||||
encoder.end_line line_kind
|
|
||||||
line_kind = nil
|
|
||||||
end
|
|
||||||
encoder.text_token match, :space
|
|
||||||
next
|
|
||||||
end
|
|
||||||
|
|
||||||
case state
|
|
||||||
|
|
||||||
when :initial
|
|
||||||
if match = scan(/--- |\+\+\+ |=+|_+/)
|
|
||||||
encoder.begin_line line_kind = :head
|
|
||||||
encoder.text_token match, :head
|
|
||||||
if match = scan(/.*?(?=$|[\t\n\x00]| \(revision)/)
|
|
||||||
encoder.text_token match, :filename
|
|
||||||
if options[:highlight_code]
|
|
||||||
file_type = FileType.fetch(match, :text)
|
|
||||||
file_type = :text if file_type == :diff
|
|
||||||
content_scanner = scanners[file_type]
|
|
||||||
content_scanner_entry_state = nil
|
|
||||||
end
|
|
||||||
end
|
|
||||||
next unless match = scan(/.+/)
|
|
||||||
encoder.text_token match, :plain
|
|
||||||
elsif match = scan(/Index: |Property changes on: /)
|
|
||||||
encoder.begin_line line_kind = :head
|
|
||||||
encoder.text_token match, :head
|
|
||||||
next unless match = scan(/.+/)
|
|
||||||
encoder.text_token match, :plain
|
|
||||||
elsif match = scan(/Added: /)
|
|
||||||
encoder.begin_line line_kind = :head
|
|
||||||
encoder.text_token match, :head
|
|
||||||
next unless match = scan(/.+/)
|
|
||||||
encoder.text_token match, :plain
|
|
||||||
state = :added
|
|
||||||
elsif match = scan(/\\ .*/)
|
|
||||||
encoder.text_token match, :comment
|
|
||||||
elsif match = scan(/@@(?>[^@\n]*)@@/)
|
|
||||||
content_scanner.state = :initial unless match?(/\n\+/)
|
|
||||||
content_scanner_entry_state = nil
|
|
||||||
if check(/\n|$/)
|
|
||||||
encoder.begin_line line_kind = :change
|
|
||||||
else
|
|
||||||
encoder.begin_group :change
|
|
||||||
end
|
|
||||||
encoder.text_token match[0,2], :change
|
|
||||||
encoder.text_token match[2...-2], :plain
|
|
||||||
encoder.text_token match[-2,2], :change
|
|
||||||
encoder.end_group :change unless line_kind
|
|
||||||
next unless match = scan(/.+/)
|
|
||||||
if options[:highlight_code]
|
|
||||||
content_scanner.tokenize match, :tokens => encoder
|
|
||||||
else
|
|
||||||
encoder.text_token match, :plain
|
|
||||||
end
|
|
||||||
next
|
|
||||||
elsif match = scan(/\+/)
|
|
||||||
encoder.begin_line line_kind = :insert
|
|
||||||
encoder.text_token match, :insert
|
|
||||||
next unless match = scan(/.+/)
|
|
||||||
if options[:highlight_code]
|
|
||||||
content_scanner.tokenize match, :tokens => encoder
|
|
||||||
else
|
|
||||||
encoder.text_token match, :plain
|
|
||||||
end
|
|
||||||
next
|
|
||||||
elsif match = scan(/-/)
|
|
||||||
deleted_lines += 1
|
|
||||||
encoder.begin_line line_kind = :delete
|
|
||||||
encoder.text_token match, :delete
|
|
||||||
if options[:inline_diff] && deleted_lines == 1 && check(/(?>.*)\n\+(?>.*)$(?!\n\+)/)
|
|
||||||
content_scanner_entry_state = content_scanner.state
|
|
||||||
skip(/(.*)\n\+(.*)$/)
|
|
||||||
head, deletion, insertion, tail = diff self[1], self[2]
|
|
||||||
pre, deleted, post = content_scanner.tokenize [head, deletion, tail], :tokens => Tokens.new
|
|
||||||
encoder.tokens pre
|
|
||||||
unless deleted.empty?
|
|
||||||
encoder.begin_group :eyecatcher
|
|
||||||
encoder.tokens deleted
|
|
||||||
encoder.end_group :eyecatcher
|
|
||||||
end
|
|
||||||
encoder.tokens post
|
|
||||||
encoder.end_line line_kind
|
|
||||||
encoder.text_token "\n", :space
|
|
||||||
encoder.begin_line line_kind = :insert
|
|
||||||
encoder.text_token '+', :insert
|
|
||||||
content_scanner.state = content_scanner_entry_state || :initial
|
|
||||||
pre, inserted, post = content_scanner.tokenize [head, insertion, tail], :tokens => Tokens.new
|
|
||||||
encoder.tokens pre
|
|
||||||
unless inserted.empty?
|
|
||||||
encoder.begin_group :eyecatcher
|
|
||||||
encoder.tokens inserted
|
|
||||||
encoder.end_group :eyecatcher
|
|
||||||
end
|
|
||||||
encoder.tokens post
|
|
||||||
elsif match = scan(/.*/)
|
|
||||||
if options[:highlight_code]
|
|
||||||
if deleted_lines == 1
|
|
||||||
content_scanner_entry_state = content_scanner.state
|
|
||||||
end
|
|
||||||
content_scanner.tokenize match, :tokens => encoder unless match.empty?
|
|
||||||
if !match?(/\n-/)
|
|
||||||
if match?(/\n\+/)
|
|
||||||
content_scanner.state = content_scanner_entry_state || :initial
|
|
||||||
end
|
|
||||||
content_scanner_entry_state = nil
|
|
||||||
end
|
|
||||||
else
|
|
||||||
encoder.text_token match, :plain
|
|
||||||
end
|
|
||||||
end
|
|
||||||
next
|
|
||||||
elsif match = scan(/ .*/)
|
|
||||||
if options[:highlight_code]
|
|
||||||
content_scanner.tokenize match, :tokens => encoder
|
|
||||||
else
|
|
||||||
encoder.text_token match, :plain
|
|
||||||
end
|
|
||||||
next
|
|
||||||
elsif match = scan(/.+/)
|
|
||||||
encoder.begin_line line_kind = :comment
|
|
||||||
encoder.text_token match, :plain
|
|
||||||
else
|
|
||||||
raise_inspect 'else case reached'
|
|
||||||
end
|
|
||||||
|
|
||||||
when :added
|
|
||||||
if match = scan(/ \+/)
|
|
||||||
encoder.begin_line line_kind = :insert
|
|
||||||
encoder.text_token match, :insert
|
|
||||||
next unless match = scan(/.+/)
|
|
||||||
encoder.text_token match, :plain
|
|
||||||
else
|
|
||||||
state = :initial
|
|
||||||
next
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
encoder.end_line line_kind if line_kind
|
|
||||||
|
|
||||||
encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
private
|
|
||||||
|
|
||||||
def diff a, b
|
|
||||||
# i will be the index of the leftmost difference from the left.
|
|
||||||
i_max = [a.size, b.size].min
|
|
||||||
i = 0
|
|
||||||
i += 1 while i < i_max && a[i] == b[i]
|
|
||||||
# j_min will be the index of the leftmost difference from the right.
|
|
||||||
j_min = i - i_max
|
|
||||||
# j will be the index of the rightmost difference from the right which
|
|
||||||
# does not precede the leftmost one from the left.
|
|
||||||
j = -1
|
|
||||||
j -= 1 while j >= j_min && a[j] == b[j]
|
|
||||||
return a[0...i], a[i..j], b[i..j], (j < -1) ? a[j+1..-1] : ''
|
|
||||||
end
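A worked example of the split computed above (the input strings are mine): the common head and tail are returned unchanged, and only the differing middle pieces end up inside the :eyecatcher groups that the inline-diff branch of scan_tokens wraps around them.

  diff 'if (foo == 1) {', 'if (bar == 1) {'
  # => ["if (", "foo", "bar", " == 1) {"]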
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,81 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
load :html
|
|
||||||
load :ruby
|
|
||||||
|
|
||||||
# Scanner for HTML ERB templates.
|
|
||||||
class ERB < Scanner
|
|
||||||
|
|
||||||
register_for :erb
|
|
||||||
title 'HTML ERB Template'
|
|
||||||
|
|
||||||
KINDS_NOT_LOC = HTML::KINDS_NOT_LOC
|
|
||||||
|
|
||||||
ERB_RUBY_BLOCK = /
|
|
||||||
(<%(?!%)[-=\#]?)
|
|
||||||
((?>
|
|
||||||
[^\-%]* # normal*
|
|
||||||
(?> # special
|
|
||||||
(?: %(?!>) | -(?!%>) )
|
|
||||||
[^\-%]* # normal*
|
|
||||||
)*
|
|
||||||
))
|
|
||||||
((?: -?%> )?)
|
|
||||||
/x # :nodoc:
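An illustrative match against ERB_RUBY_BLOCK (the example tag is mine, not from the source); the three captures are exactly what scan_tokens reads back as self[1], self[2] and self[3] below:

  '<%= link_to @issue %>' =~ CodeRay::Scanners::ERB::ERB_RUBY_BLOCK
  $1  # => "<%="                start tag (a "<%#" here marks a comment block)
  $2  # => " link_to @issue "   embedded Ruby, handed to @ruby_scanner
  $3  # => "%>"                 end tag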
|
|
||||||
|
|
||||||
START_OF_ERB = /
|
|
||||||
<%(?!%)
|
|
||||||
/x # :nodoc:
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def setup
|
|
||||||
@ruby_scanner = CodeRay.scanner :ruby, :tokens => @tokens, :keep_tokens => true
|
|
||||||
@html_scanner = CodeRay.scanner :html, :tokens => @tokens, :keep_tokens => true, :keep_state => true
|
|
||||||
end
|
|
||||||
|
|
||||||
def reset_instance
|
|
||||||
super
|
|
||||||
@html_scanner.reset
|
|
||||||
end
|
|
||||||
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
|
|
||||||
until eos?
|
|
||||||
|
|
||||||
if (match = scan_until(/(?=#{START_OF_ERB})/o) || scan_rest) and not match.empty?
|
|
||||||
@html_scanner.tokenize match, :tokens => encoder
|
|
||||||
|
|
||||||
elsif match = scan(/#{ERB_RUBY_BLOCK}/o)
|
|
||||||
start_tag = self[1]
|
|
||||||
code = self[2]
|
|
||||||
end_tag = self[3]
|
|
||||||
|
|
||||||
encoder.begin_group :inline
|
|
||||||
encoder.text_token start_tag, :inline_delimiter
|
|
||||||
|
|
||||||
if start_tag == '<%#'
|
|
||||||
encoder.text_token code, :comment
|
|
||||||
else
|
|
||||||
@ruby_scanner.tokenize code, :tokens => encoder
|
|
||||||
end unless code.empty?
|
|
||||||
|
|
||||||
encoder.text_token end_tag, :inline_delimiter unless end_tag.empty?
|
|
||||||
encoder.end_group :inline
|
|
||||||
|
|
||||||
else
|
|
||||||
raise_inspect 'else-case reached!', encoder
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
encoder
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,255 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
load :java
|
|
||||||
|
|
||||||
# Scanner for Groovy.
|
|
||||||
class Groovy < Java
|
|
||||||
|
|
||||||
register_for :groovy
|
|
||||||
|
|
||||||
# TODO: check list of keywords
|
|
||||||
GROOVY_KEYWORDS = %w[
|
|
||||||
as assert def in
|
|
||||||
] # :nodoc:
|
|
||||||
KEYWORDS_EXPECTING_VALUE = WordList.new.add %w[
|
|
||||||
case instanceof new return throw typeof while as assert in
|
|
||||||
] # :nodoc:
|
|
||||||
GROOVY_MAGIC_VARIABLES = %w[ it ] # :nodoc:
|
|
||||||
|
|
||||||
IDENT_KIND = Java::IDENT_KIND.dup.
|
|
||||||
add(GROOVY_KEYWORDS, :keyword).
|
|
||||||
add(GROOVY_MAGIC_VARIABLES, :local_variable) # :nodoc:
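Compared with the Java word list it is copied from, the list above adds the Groovy-only keywords and the implicit closure parameter. Illustrative lookups (evaluated inside the class above):

  IDENT_KIND['def']  # => :keyword         (from GROOVY_KEYWORDS)
  IDENT_KIND['it']   # => :local_variable  (from GROOVY_MAGIC_VARIABLES)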
|
|
||||||
|
|
||||||
ESCAPE = / [bfnrtv$\n\\'"] | x[a-fA-F0-9]{1,2} | [0-7]{1,3} /x # :nodoc:
|
|
||||||
UNICODE_ESCAPE = / u[a-fA-F0-9]{4} /x # :nodoc: no 4-byte unicode chars? U[a-fA-F0-9]{8}
|
|
||||||
REGEXP_ESCAPE = / [bfnrtv\n\\'"] | x[a-fA-F0-9]{1,2} | [0-7]{1,3} | \d | [bBdDsSwW\/] /x # :nodoc:
|
|
||||||
|
|
||||||
# TODO: interpretation inside ', ", /
|
|
||||||
STRING_CONTENT_PATTERN = {
|
|
||||||
"'" => /(?>\\[^\\'\n]+|[^\\'\n]+)+/,
|
|
||||||
'"' => /[^\\$"\n]+/,
|
|
||||||
"'''" => /(?>[^\\']+|'(?!''))+/,
|
|
||||||
'"""' => /(?>[^\\$"]+|"(?!""))+/,
|
|
||||||
'/' => /[^\\$\/\n]+/,
|
|
||||||
} # :nodoc:
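The hash above is keyed by the opening delimiter recorded in string_delimiter, so the shared :string/:regexp/:multiline_string branch below can pick the right "plain content" pattern for '...', "...", '''...''', """...""" and /.../ literals. An illustrative lookup (the sample string is mine):

  STRING_CONTENT_PATTERN['"'] =~ 'abc$name and more'
  # => 0; the match is "abc", stopping at the $ so the interpolation
  #    branches further down can emit an :inline group for $name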
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
|
|
||||||
state = :initial
|
|
||||||
inline_block_stack = []
|
|
||||||
inline_block_paren_depth = nil
|
|
||||||
string_delimiter = nil
|
|
||||||
import_clause = class_name_follows = last_token = after_def = false
|
|
||||||
value_expected = true
|
|
||||||
|
|
||||||
until eos?
|
|
||||||
|
|
||||||
case state
|
|
||||||
|
|
||||||
when :initial
|
|
||||||
|
|
||||||
if match = scan(/ \s+ | \\\n /x)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
if match.index ?\n
|
|
||||||
import_clause = after_def = false
|
|
||||||
value_expected = true unless value_expected
|
|
||||||
end
|
|
||||||
next
|
|
||||||
|
|
||||||
elsif match = scan(%r! // [^\n\\]* (?: \\. [^\n\\]* )* | /\* (?: .*? \*/ | .* ) !mx)
|
|
||||||
value_expected = true
|
|
||||||
after_def = false
|
|
||||||
encoder.text_token match, :comment
|
|
||||||
|
|
||||||
elsif bol? && match = scan(/ \#!.* /x)
|
|
||||||
encoder.text_token match, :doctype
|
|
||||||
|
|
||||||
elsif import_clause && match = scan(/ (?!as) #{IDENT} (?: \. #{IDENT} )* (?: \.\* )? /ox)
|
|
||||||
after_def = value_expected = false
|
|
||||||
encoder.text_token match, :include
|
|
||||||
|
|
||||||
elsif match = scan(/ #{IDENT} | \[\] /ox)
|
|
||||||
kind = IDENT_KIND[match]
|
|
||||||
value_expected = (kind == :keyword) && KEYWORDS_EXPECTING_VALUE[match]
|
|
||||||
if last_token == '.'
|
|
||||||
kind = :ident
|
|
||||||
elsif class_name_follows
|
|
||||||
kind = :class
|
|
||||||
class_name_follows = false
|
|
||||||
elsif after_def && check(/\s*[({]/)
|
|
||||||
kind = :method
|
|
||||||
after_def = false
|
|
||||||
elsif kind == :ident && last_token != '?' && check(/:/)
|
|
||||||
kind = :key
|
|
||||||
else
|
|
||||||
class_name_follows = true if match == 'class' || (import_clause && match == 'as')
|
|
||||||
import_clause = match == 'import'
|
|
||||||
after_def = true if match == 'def'
|
|
||||||
end
|
|
||||||
encoder.text_token match, kind
|
|
||||||
|
|
||||||
elsif match = scan(/;/)
|
|
||||||
import_clause = after_def = false
|
|
||||||
value_expected = true
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
|
|
||||||
elsif match = scan(/\{/)
|
|
||||||
class_name_follows = after_def = false
|
|
||||||
value_expected = true
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
if !inline_block_stack.empty?
|
|
||||||
inline_block_paren_depth += 1
|
|
||||||
end
|
|
||||||
|
|
||||||
# TODO: ~'...', ~"..." and ~/.../ style regexps
|
|
||||||
elsif match = scan(/ \.\.<? | \*?\.(?!\d)@? | \.& | \?:? | [,?:(\[] | -[->] | \+\+ |
|
|
||||||
&& | \|\| | \*\*=? | ==?~ | <=?>? | [-+*%^~&|>=!]=? | <<<?=? | >>>?=? /x)
|
|
||||||
value_expected = true
|
|
||||||
value_expected = :regexp if match == '~'
|
|
||||||
after_def = false
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
|
|
||||||
elsif match = scan(/ [)\]}] /x)
|
|
||||||
value_expected = after_def = false
|
|
||||||
if !inline_block_stack.empty? && match == '}'
|
|
||||||
inline_block_paren_depth -= 1
|
|
||||||
if inline_block_paren_depth == 0 # closing brace of inline block reached
|
|
||||||
encoder.text_token match, :inline_delimiter
|
|
||||||
encoder.end_group :inline
|
|
||||||
state, string_delimiter, inline_block_paren_depth = inline_block_stack.pop
|
|
||||||
next
|
|
||||||
end
|
|
||||||
end
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
|
|
||||||
elsif check(/[\d.]/)
|
|
||||||
after_def = value_expected = false
|
|
||||||
if match = scan(/0[xX][0-9A-Fa-f]+/)
|
|
||||||
encoder.text_token match, :hex
|
|
||||||
elsif match = scan(/(?>0[0-7]+)(?![89.eEfF])/)
|
|
||||||
encoder.text_token match, :octal
|
|
||||||
elsif match = scan(/\d+[fFdD]|\d*\.\d+(?:[eE][+-]?\d+)?[fFdD]?|\d+[eE][+-]?\d+[fFdD]?/)
|
|
||||||
encoder.text_token match, :float
|
|
||||||
elsif match = scan(/\d+[lLgG]?/)
|
|
||||||
encoder.text_token match, :integer
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif match = scan(/'''|"""/)
|
|
||||||
after_def = value_expected = false
|
|
||||||
state = :multiline_string
|
|
||||||
encoder.begin_group :string
|
|
||||||
string_delimiter = match
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
|
|
||||||
# TODO: record.'name' syntax
|
|
||||||
elsif match = scan(/["']/)
|
|
||||||
after_def = value_expected = false
|
|
||||||
state = match == '/' ? :regexp : :string
|
|
||||||
encoder.begin_group state
|
|
||||||
string_delimiter = match
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
|
|
||||||
elsif value_expected && match = scan(/\//)
|
|
||||||
after_def = value_expected = false
|
|
||||||
encoder.begin_group :regexp
|
|
||||||
state = :regexp
|
|
||||||
string_delimiter = '/'
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
|
|
||||||
elsif match = scan(/ @ #{IDENT} /ox)
|
|
||||||
after_def = value_expected = false
|
|
||||||
encoder.text_token match, :annotation
|
|
||||||
|
|
||||||
elsif match = scan(/\//)
|
|
||||||
after_def = false
|
|
||||||
value_expected = true
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
|
|
||||||
else
|
|
||||||
encoder.text_token getch, :error
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
when :string, :regexp, :multiline_string
|
|
||||||
if match = scan(STRING_CONTENT_PATTERN[string_delimiter])
|
|
||||||
encoder.text_token match, :content
|
|
||||||
|
|
||||||
elsif match = scan(state == :multiline_string ? /'''|"""/ : /["'\/]/)
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
if state == :regexp
|
|
||||||
# TODO: regexp modifiers? s, m, x, i?
|
|
||||||
modifiers = scan(/[ix]+/)
|
|
||||||
encoder.text_token modifiers, :modifier if modifiers && !modifiers.empty?
|
|
||||||
end
|
|
||||||
state = :string if state == :multiline_string
|
|
||||||
encoder.end_group state
|
|
||||||
string_delimiter = nil
|
|
||||||
after_def = value_expected = false
|
|
||||||
state = :initial
|
|
||||||
next
|
|
||||||
|
|
||||||
elsif (state == :string || state == :multiline_string) &&
|
|
||||||
(match = scan(/ \\ (?: #{ESCAPE} | #{UNICODE_ESCAPE} ) /mox))
|
|
||||||
if string_delimiter[0] == ?' && !(match == "\\\\" || match == "\\'")
|
|
||||||
encoder.text_token match, :content
|
|
||||||
else
|
|
||||||
encoder.text_token match, :char
|
|
||||||
end
|
|
||||||
elsif state == :regexp && match = scan(/ \\ (?: #{REGEXP_ESCAPE} | #{UNICODE_ESCAPE} ) /mox)
|
|
||||||
encoder.text_token match, :char
|
|
||||||
|
|
||||||
elsif match = scan(/ \$ #{IDENT} /mox)
|
|
||||||
encoder.begin_group :inline
|
|
||||||
encoder.text_token '$', :inline_delimiter
|
|
||||||
match = match[1..-1]
|
|
||||||
encoder.text_token match, IDENT_KIND[match]
|
|
||||||
encoder.end_group :inline
|
|
||||||
next
|
|
||||||
elsif match = scan(/ \$ \{ /x)
|
|
||||||
encoder.begin_group :inline
|
|
||||||
encoder.text_token match, :inline_delimiter
|
|
||||||
inline_block_stack << [state, string_delimiter, inline_block_paren_depth]
|
|
||||||
inline_block_paren_depth = 1
|
|
||||||
state = :initial
|
|
||||||
next
|
|
||||||
|
|
||||||
elsif match = scan(/ \$ /mx)
|
|
||||||
encoder.text_token match, :content
|
|
||||||
|
|
||||||
elsif match = scan(/ \\. /mx)
|
|
||||||
encoder.text_token match, :content # TODO: Shouldn't this be :error?
|
|
||||||
|
|
||||||
elsif match = scan(/ \\ | \n /x)
|
|
||||||
encoder.end_group state
|
|
||||||
encoder.text_token match, :error
|
|
||||||
after_def = value_expected = false
|
|
||||||
state = :initial
|
|
||||||
|
|
||||||
else
|
|
||||||
raise_inspect "else case \" reached; %p not handled." % peek(1), encoder
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
else
|
|
||||||
raise_inspect 'Unknown state', encoder
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
last_token = match unless [:space, :comment, :doctype].include? kind
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
if [:multiline_string, :string, :regexp].include? state
|
|
||||||
encoder.end_group state
|
|
||||||
end
|
|
||||||
|
|
||||||
encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,168 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
load :ruby
|
|
||||||
load :html
|
|
||||||
load :java_script
|
|
||||||
|
|
||||||
class HAML < Scanner
|
|
||||||
|
|
||||||
register_for :haml
|
|
||||||
title 'HAML Template'
|
|
||||||
|
|
||||||
KINDS_NOT_LOC = HTML::KINDS_NOT_LOC
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def setup
|
|
||||||
super
|
|
||||||
@ruby_scanner = CodeRay.scanner :ruby, :tokens => @tokens, :keep_tokens => true
|
|
||||||
@embedded_ruby_scanner = CodeRay.scanner :ruby, :tokens => @tokens, :keep_tokens => true, :state => @ruby_scanner.interpreted_string_state
|
|
||||||
@html_scanner = CodeRay.scanner :html, :tokens => @tokens, :keep_tokens => true
|
|
||||||
end
|
|
||||||
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
|
|
||||||
match = nil
|
|
||||||
code = ''
|
|
||||||
|
|
||||||
until eos?
|
|
||||||
|
|
||||||
if bol?
|
|
||||||
if match = scan(/!!!.*/)
|
|
||||||
encoder.text_token match, :doctype
|
|
||||||
next
|
|
||||||
end
|
|
||||||
|
|
||||||
if match = scan(/(?>( *)(\/(?!\[if)|-\#|:javascript|:ruby|:\w+) *)(?=\n)/)
|
|
||||||
encoder.text_token match, :comment
|
|
||||||
|
|
||||||
code = self[2]
|
|
||||||
if match = scan(/(?:\n+#{self[1]} .*)+/)
|
|
||||||
case code
|
|
||||||
when '/', '-#'
|
|
||||||
encoder.text_token match, :comment
|
|
||||||
when ':javascript'
|
|
||||||
# TODO: recognize #{...} snippets inside JavaScript
|
|
||||||
@java_script_scanner ||= CodeRay.scanner :java_script, :tokens => @tokens, :keep_tokens => true
|
|
||||||
@java_script_scanner.tokenize match, :tokens => encoder
|
|
||||||
when ':ruby'
|
|
||||||
@ruby_scanner.tokenize match, :tokens => encoder
|
|
||||||
when /:\w+/
|
|
||||||
encoder.text_token match, :comment
|
|
||||||
else
|
|
||||||
raise 'else-case reached: %p' % [code]
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
if match = scan(/ +/)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
end
|
|
||||||
|
|
||||||
if match = scan(/\/.*/)
|
|
||||||
encoder.text_token match, :comment
|
|
||||||
next
|
|
||||||
end
|
|
||||||
|
|
||||||
if match = scan(/\\/)
|
|
||||||
encoder.text_token match, :plain
|
|
||||||
if match = scan(/.+/)
|
|
||||||
@html_scanner.tokenize match, :tokens => encoder
|
|
||||||
end
|
|
||||||
next
|
|
||||||
end
|
|
||||||
|
|
||||||
tag = false
|
|
||||||
|
|
||||||
if match = scan(/%[\w:]+\/?/)
|
|
||||||
encoder.text_token match, :tag
|
|
||||||
# if match = scan(/( +)(.+)/)
|
|
||||||
# encoder.text_token self[1], :space
|
|
||||||
# @embedded_ruby_scanner.tokenize self[2], :tokens => encoder
|
|
||||||
# end
|
|
||||||
tag = true
|
|
||||||
end
|
|
||||||
|
|
||||||
while match = scan(/([.#])[-\w]*\w/)
|
|
||||||
encoder.text_token match, self[1] == '#' ? :constant : :class
|
|
||||||
tag = true
|
|
||||||
end
|
|
||||||
|
|
||||||
if tag && match = scan(/(\()([^)]+)?(\))?/)
|
|
||||||
# TODO: recognize title=@title, class="widget_#{@widget.number}"
|
|
||||||
encoder.text_token self[1], :plain
|
|
||||||
@html_scanner.tokenize self[2], :tokens => encoder, :state => :attribute if self[2]
|
|
||||||
encoder.text_token self[3], :plain if self[3]
|
|
||||||
end
|
|
||||||
|
|
||||||
if tag && match = scan(/\{/)
|
|
||||||
encoder.text_token match, :plain
|
|
||||||
|
|
||||||
code = ''
|
|
||||||
level = 1
|
|
||||||
while true
|
|
||||||
code << scan(/([^\{\},\n]|, *\n?)*/)
|
|
||||||
case match = getch
|
|
||||||
when '{'
|
|
||||||
level += 1
|
|
||||||
code << match
|
|
||||||
when '}'
|
|
||||||
level -= 1
|
|
||||||
if level > 0
|
|
||||||
code << match
|
|
||||||
else
|
|
||||||
break
|
|
||||||
end
|
|
||||||
when "\n", ",", nil
|
|
||||||
break
|
|
||||||
end
|
|
||||||
end
|
|
||||||
@ruby_scanner.tokenize code, :tokens => encoder unless code.empty?
|
|
||||||
|
|
||||||
encoder.text_token match, :plain if match
|
|
||||||
end
|
|
||||||
|
|
||||||
if tag && match = scan(/(\[)([^\]\n]+)?(\])?/)
|
|
||||||
encoder.text_token self[1], :plain
|
|
||||||
@ruby_scanner.tokenize self[2], :tokens => encoder if self[2]
|
|
||||||
encoder.text_token self[3], :plain if self[3]
|
|
||||||
end
|
|
||||||
|
|
||||||
if tag && match = scan(/\//)
|
|
||||||
encoder.text_token match, :tag
|
|
||||||
end
|
|
||||||
|
|
||||||
if scan(/(>?<?[-=]|[&!]=|(& |!)|~)( *)([^,\n\|]+(?:(, *|\|(?=.|\n.*\|$))\n?[^,\n\|]*)*)?/)
|
|
||||||
encoder.text_token self[1] + self[3], :plain
|
|
||||||
if self[4]
|
|
||||||
if self[2]
|
|
||||||
@embedded_ruby_scanner.tokenize self[4], :tokens => encoder
|
|
||||||
else
|
|
||||||
@ruby_scanner.tokenize self[4], :tokens => encoder
|
|
||||||
end
|
|
||||||
end
|
|
||||||
elsif match = scan(/((?:<|><?)(?![!?\/\w]))?(.+)?/)
|
|
||||||
encoder.text_token self[1], :plain if self[1]
|
|
||||||
# TODO: recognize #{...} snippets
|
|
||||||
@html_scanner.tokenize self[2], :tokens => encoder if self[2]
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif match = scan(/.+/)
|
|
||||||
@html_scanner.tokenize match, :tokens => encoder
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
if match = scan(/\n/)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
encoder
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,253 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
# HTML Scanner
|
|
||||||
#
|
|
||||||
# Alias: +xhtml+
|
|
||||||
#
|
|
||||||
# See also: Scanners::XML
|
|
||||||
class HTML < Scanner
|
|
||||||
|
|
||||||
register_for :html
|
|
||||||
|
|
||||||
KINDS_NOT_LOC = [
|
|
||||||
:comment, :doctype, :preprocessor,
|
|
||||||
:tag, :attribute_name, :operator,
|
|
||||||
:attribute_value, :string,
|
|
||||||
:plain, :entity, :error,
|
|
||||||
] # :nodoc:
|
|
||||||
|
|
||||||
EVENT_ATTRIBUTES = %w(
|
|
||||||
onabort onafterprint onbeforeprint onbeforeunload onblur oncanplay
|
|
||||||
oncanplaythrough onchange onclick oncontextmenu oncuechange ondblclick
|
|
||||||
ondrag ondragdrop ondragend ondragenter ondragleave ondragover
|
|
||||||
ondragstart ondrop ondurationchange onemptied onended onerror onfocus
|
|
||||||
onformchange onforminput onhashchange oninput oninvalid onkeydown
|
|
||||||
onkeypress onkeyup onload onloadeddata onloadedmetadata onloadstart
|
|
||||||
onmessage onmousedown onmousemove onmouseout onmouseover onmouseup
|
|
||||||
onmousewheel onmove onoffline ononline onpagehide onpageshow onpause
|
|
||||||
onplay onplaying onpopstate onprogress onratechange onreadystatechange
|
|
||||||
onredo onreset onresize onscroll onseeked onseeking onselect onshow
|
|
||||||
onstalled onstorage onsubmit onsuspend ontimeupdate onundo onunload
|
|
||||||
onvolumechange onwaiting
|
|
||||||
)
|
|
||||||
|
|
||||||
IN_ATTRIBUTE = WordList::CaseIgnoring.new(nil).
|
|
||||||
add(EVENT_ATTRIBUTES, :script)
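Because the word list above defaults to nil and ignores case, only the event attributes are tagged. Illustrative lookups (evaluated inside the class above):

  IN_ATTRIBUTE['onclick']  # => :script  (its value is handed to the JavaScript scanner)
  IN_ATTRIBUTE['onClick']  # => :script  (case-ignoring lookup)
  IN_ATTRIBUTE['href']     # => nil      (treated as a plain attribute value)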
|
|
||||||
|
|
||||||
ATTR_NAME = /[\w.:-]+/ # :nodoc:
|
|
||||||
TAG_END = /\/?>/ # :nodoc:
|
|
||||||
HEX = /[0-9a-fA-F]/ # :nodoc:
|
|
||||||
ENTITY = /
|
|
||||||
&
|
|
||||||
(?:
|
|
||||||
\w+
|
|
||||||
|
|
|
||||||
\#
|
|
||||||
(?:
|
|
||||||
\d+
|
|
||||||
|
|
|
||||||
x#{HEX}+
|
|
||||||
)
|
|
||||||
)
|
|
||||||
;
|
|
||||||
/ox # :nodoc:
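The ENTITY pattern accepts named, decimal and hexadecimal character references, all terminated by a semicolon. A quick illustrative check (the example strings are mine):

  %w[ &amp; &#8212; &#x20AC; ].all? { |s| s =~ CodeRay::Scanners::HTML::ENTITY }
  # => true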
|
|
||||||
|
|
||||||
PLAIN_STRING_CONTENT = {
|
|
||||||
"'" => /[^&'>\n]+/,
|
|
||||||
'"' => /[^&">\n]+/,
|
|
||||||
} # :nodoc:
|
|
||||||
|
|
||||||
def reset
|
|
||||||
super
|
|
||||||
@state = :initial
|
|
||||||
@plain_string_content = nil
|
|
||||||
end
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def setup
|
|
||||||
@state = :initial
|
|
||||||
@plain_string_content = nil
|
|
||||||
end
|
|
||||||
|
|
||||||
def scan_java_script encoder, code
|
|
||||||
if code && !code.empty?
|
|
||||||
@java_script_scanner ||= Scanners::JavaScript.new '', :keep_tokens => true
|
|
||||||
# encoder.begin_group :inline
|
|
||||||
@java_script_scanner.tokenize code, :tokens => encoder
|
|
||||||
# encoder.end_group :inline
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
state = options[:state] || @state
|
|
||||||
plain_string_content = @plain_string_content
|
|
||||||
in_tag = in_attribute = nil
|
|
||||||
|
|
||||||
encoder.begin_group :string if state == :attribute_value_string
|
|
||||||
|
|
||||||
until eos?
|
|
||||||
|
|
||||||
if state != :in_special_tag && match = scan(/\s+/m)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
|
|
||||||
else
|
|
||||||
|
|
||||||
case state
|
|
||||||
|
|
||||||
when :initial
|
|
||||||
if match = scan(/<!--(?:.*?-->|.*)/m)
|
|
||||||
encoder.text_token match, :comment
|
|
||||||
elsif match = scan(/<!DOCTYPE(?:.*?>|.*)/m)
|
|
||||||
encoder.text_token match, :doctype
|
|
||||||
elsif match = scan(/<\?xml(?:.*?\?>|.*)/m)
|
|
||||||
encoder.text_token match, :preprocessor
|
|
||||||
elsif match = scan(/<\?(?:.*?\?>|.*)/m)
|
|
||||||
encoder.text_token match, :comment
|
|
||||||
elsif match = scan(/<\/[-\w.:]*>?/m)
|
|
||||||
in_tag = nil
|
|
||||||
encoder.text_token match, :tag
|
|
||||||
elsif match = scan(/<(?:(script)|[-\w.:]+)(>)?/m)
|
|
||||||
encoder.text_token match, :tag
|
|
||||||
in_tag = self[1]
|
|
||||||
if self[2]
|
|
||||||
state = :in_special_tag if in_tag
|
|
||||||
else
|
|
||||||
state = :attribute
|
|
||||||
end
|
|
||||||
elsif match = scan(/[^<>&]+/)
|
|
||||||
encoder.text_token match, :plain
|
|
||||||
elsif match = scan(/#{ENTITY}/ox)
|
|
||||||
encoder.text_token match, :entity
|
|
||||||
elsif match = scan(/[<>&]/)
|
|
||||||
in_tag = nil
|
|
||||||
encoder.text_token match, :error
|
|
||||||
else
|
|
||||||
raise_inspect '[BUG] else-case reached with state %p' % [state], encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
when :attribute
|
|
||||||
if match = scan(/#{TAG_END}/o)
|
|
||||||
encoder.text_token match, :tag
|
|
||||||
in_attribute = nil
|
|
||||||
if in_tag
|
|
||||||
state = :in_special_tag
|
|
||||||
else
|
|
||||||
state = :initial
|
|
||||||
end
|
|
||||||
elsif match = scan(/#{ATTR_NAME}/o)
|
|
||||||
in_attribute = IN_ATTRIBUTE[match]
|
|
||||||
encoder.text_token match, :attribute_name
|
|
||||||
state = :attribute_equal
|
|
||||||
else
|
|
||||||
in_tag = nil
|
|
||||||
encoder.text_token getch, :error
|
|
||||||
end
|
|
||||||
|
|
||||||
when :attribute_equal
|
|
||||||
if match = scan(/=/) #/
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
state = :attribute_value
|
|
||||||
elsif scan(/#{ATTR_NAME}/o) || scan(/#{TAG_END}/o)
|
|
||||||
state = :attribute
|
|
||||||
next
|
|
||||||
else
|
|
||||||
encoder.text_token getch, :error
|
|
||||||
state = :attribute
|
|
||||||
end
|
|
||||||
|
|
||||||
when :attribute_value
|
|
||||||
if match = scan(/#{ATTR_NAME}/o)
|
|
||||||
encoder.text_token match, :attribute_value
|
|
||||||
state = :attribute
|
|
||||||
elsif match = scan(/["']/)
|
|
||||||
if in_attribute == :script
|
|
||||||
encoder.begin_group :inline
|
|
||||||
encoder.text_token match, :inline_delimiter
|
|
||||||
if scan(/javascript:[ \t]*/)
|
|
||||||
encoder.text_token matched, :comment
|
|
||||||
end
|
|
||||||
code = scan_until(match == '"' ? /(?="|\z)/ : /(?='|\z)/)
|
|
||||||
scan_java_script encoder, code
|
|
||||||
match = scan(/["']/)
|
|
||||||
encoder.text_token match, :inline_delimiter if match
|
|
||||||
encoder.end_group :inline
|
|
||||||
state = :attribute
|
|
||||||
in_attribute = nil
|
|
||||||
else
|
|
||||||
encoder.begin_group :string
|
|
||||||
state = :attribute_value_string
|
|
||||||
plain_string_content = PLAIN_STRING_CONTENT[match]
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
end
|
|
||||||
elsif match = scan(/#{TAG_END}/o)
|
|
||||||
encoder.text_token match, :tag
|
|
||||||
state = :initial
|
|
||||||
else
|
|
||||||
encoder.text_token getch, :error
|
|
||||||
end
|
|
||||||
|
|
||||||
when :attribute_value_string
|
|
||||||
if match = scan(plain_string_content)
|
|
||||||
encoder.text_token match, :content
|
|
||||||
elsif match = scan(/['"]/)
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
encoder.end_group :string
|
|
||||||
state = :attribute
|
|
||||||
elsif match = scan(/#{ENTITY}/ox)
|
|
||||||
encoder.text_token match, :entity
|
|
||||||
elsif match = scan(/&/)
|
|
||||||
encoder.text_token match, :content
|
|
||||||
elsif match = scan(/[\n>]/)
|
|
||||||
encoder.end_group :string
|
|
||||||
state = :initial
|
|
||||||
encoder.text_token match, :error
|
|
||||||
end
|
|
||||||
|
|
||||||
when :in_special_tag
|
|
||||||
case in_tag
|
|
||||||
when 'script'
|
|
||||||
encoder.text_token match, :space if match = scan(/[ \t]*\n/)
|
|
||||||
if scan(/(\s*<!--)(?:(.*?)(-->)|(.*))/m)
|
|
||||||
code = self[2] || self[4]
|
|
||||||
closing = self[3]
|
|
||||||
encoder.text_token self[1], :comment
|
|
||||||
else
|
|
||||||
code = scan_until(/(?=(?:\n\s*)?<\/script>)|\z/)
|
|
||||||
closing = false
|
|
||||||
end
|
|
||||||
unless code.empty?
|
|
||||||
encoder.begin_group :inline
|
|
||||||
scan_java_script encoder, code
|
|
||||||
encoder.end_group :inline
|
|
||||||
end
|
|
||||||
encoder.text_token closing, :comment if closing
|
|
||||||
state = :initial
|
|
||||||
else
|
|
||||||
raise 'unknown special tag: %p' % [in_tag]
|
|
||||||
end
|
|
||||||
|
|
||||||
else
|
|
||||||
raise_inspect 'Unknown state: %p' % [state], encoder
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
if options[:keep_state]
|
|
||||||
@state = state
|
|
||||||
@plain_string_content = plain_string_content
|
|
||||||
end
|
|
||||||
|
|
||||||
encoder.end_group :string if state == :attribute_value_string
|
|
||||||
|
|
||||||
encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,174 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
# Scanner for Java.
|
|
||||||
class Java < Scanner
|
|
||||||
|
|
||||||
register_for :java
|
|
||||||
|
|
||||||
autoload :BuiltinTypes, 'coderay/scanners/java/builtin_types'
|
|
||||||
|
|
||||||
# http://java.sun.com/docs/books/tutorial/java/nutsandbolts/_keywords.html
|
|
||||||
KEYWORDS = %w[
|
|
||||||
assert break case catch continue default do else
|
|
||||||
finally for if instanceof import new package
|
|
||||||
return switch throw try typeof while
|
|
||||||
debugger export
|
|
||||||
] # :nodoc:
|
|
||||||
RESERVED = %w[ const goto ] # :nodoc:
|
|
||||||
CONSTANTS = %w[ false null true ] # :nodoc:
|
|
||||||
MAGIC_VARIABLES = %w[ this super ] # :nodoc:
|
|
||||||
TYPES = %w[
|
|
||||||
boolean byte char class double enum float int interface long
|
|
||||||
short void
|
|
||||||
] << '[]' # :nodoc: because int[] should be highlighted as a type
|
|
||||||
DIRECTIVES = %w[
|
|
||||||
abstract extends final implements native private protected public
|
|
||||||
static strictfp synchronized throws transient volatile
|
|
||||||
] # :nodoc:
|
|
||||||
|
|
||||||
IDENT_KIND = WordList.new(:ident).
|
|
||||||
add(KEYWORDS, :keyword).
|
|
||||||
add(RESERVED, :reserved).
|
|
||||||
add(CONSTANTS, :predefined_constant).
|
|
||||||
add(MAGIC_VARIABLES, :local_variable).
|
|
||||||
add(TYPES, :type).
|
|
||||||
add(BuiltinTypes::List, :predefined_type).
|
|
||||||
add(BuiltinTypes::List.select { |builtin| builtin[/(Error|Exception)$/] }, :exception).
|
|
||||||
add(DIRECTIVES, :directive) # :nodoc:
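The chained word list above gives scan_tokens a single hash lookup to classify every identifier. Illustrative lookups (evaluated inside the class above, using words visible in the lists):

  IDENT_KIND['for']                  # => :keyword
  IDENT_KIND['ArrayList']            # => :predefined_type  (from BuiltinTypes::List)
  IDENT_KIND['ArithmeticException']  # => :exception        (name matches /(Error|Exception)$/)
  IDENT_KIND['myCounter']            # => :ident            (the default)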
|
|
||||||
|
|
||||||
ESCAPE = / [bfnrtv\n\\'"] | x[a-fA-F0-9]{1,2} | [0-7]{1,3} /x # :nodoc:
|
|
||||||
UNICODE_ESCAPE = / u[a-fA-F0-9]{4} | U[a-fA-F0-9]{8} /x # :nodoc:
|
|
||||||
STRING_CONTENT_PATTERN = {
|
|
||||||
"'" => /[^\\']+/,
|
|
||||||
'"' => /[^\\"]+/,
|
|
||||||
'/' => /[^\\\/]+/,
|
|
||||||
} # :nodoc:
|
|
||||||
IDENT = /[a-zA-Z_][A-Za-z_0-9]*/ # :nodoc:
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
|
|
||||||
state = :initial
|
|
||||||
string_delimiter = nil
|
|
||||||
package_name_expected = false
|
|
||||||
class_name_follows = false
|
|
||||||
last_token_dot = false
|
|
||||||
|
|
||||||
until eos?
|
|
||||||
|
|
||||||
case state
|
|
||||||
|
|
||||||
when :initial
|
|
||||||
|
|
||||||
if match = scan(/ \s+ | \\\n /x)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
next
|
|
||||||
|
|
||||||
elsif match = scan(%r! // [^\n\\]* (?: \\. [^\n\\]* )* | /\* (?: .*? \*/ | .* ) !mx)
|
|
||||||
encoder.text_token match, :comment
|
|
||||||
next
|
|
||||||
|
|
||||||
elsif package_name_expected && match = scan(/ #{IDENT} (?: \. #{IDENT} )* /ox)
|
|
||||||
encoder.text_token match, package_name_expected
|
|
||||||
|
|
||||||
elsif match = scan(/ #{IDENT} | \[\] /ox)
|
|
||||||
kind = IDENT_KIND[match]
|
|
||||||
if last_token_dot
|
|
||||||
kind = :ident
|
|
||||||
elsif class_name_follows
|
|
||||||
kind = :class
|
|
||||||
class_name_follows = false
|
|
||||||
else
|
|
||||||
case match
|
|
||||||
when 'import'
|
|
||||||
package_name_expected = :include
|
|
||||||
when 'package'
|
|
||||||
package_name_expected = :namespace
|
|
||||||
when 'class', 'interface'
|
|
||||||
class_name_follows = true
|
|
||||||
end
|
|
||||||
end
|
|
||||||
encoder.text_token match, kind
|
|
||||||
|
|
||||||
elsif match = scan(/ \.(?!\d) | [,?:()\[\]}] | -- | \+\+ | && | \|\| | \*\*=? | [-+*\/%^~&|<>=!]=? | <<<?=? | >>>?=? /x)
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
|
|
||||||
elsif match = scan(/;/)
|
|
||||||
package_name_expected = false
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
|
|
||||||
elsif match = scan(/\{/)
|
|
||||||
class_name_follows = false
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
|
|
||||||
elsif check(/[\d.]/)
|
|
||||||
if match = scan(/0[xX][0-9A-Fa-f]+/)
|
|
||||||
encoder.text_token match, :hex
|
|
||||||
elsif match = scan(/(?>0[0-7]+)(?![89.eEfF])/)
|
|
||||||
encoder.text_token match, :octal
|
|
||||||
elsif match = scan(/\d+[fFdD]|\d*\.\d+(?:[eE][+-]?\d+)?[fFdD]?|\d+[eE][+-]?\d+[fFdD]?/)
|
|
||||||
encoder.text_token match, :float
|
|
||||||
elsif match = scan(/\d+[lL]?/)
|
|
||||||
encoder.text_token match, :integer
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif match = scan(/["']/)
|
|
||||||
state = :string
|
|
||||||
encoder.begin_group state
|
|
||||||
string_delimiter = match
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
|
|
||||||
elsif match = scan(/ @ #{IDENT} /ox)
|
|
||||||
encoder.text_token match, :annotation
|
|
||||||
|
|
||||||
else
|
|
||||||
encoder.text_token getch, :error
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
when :string
|
|
||||||
if match = scan(STRING_CONTENT_PATTERN[string_delimiter])
|
|
||||||
encoder.text_token match, :content
|
|
||||||
elsif match = scan(/["'\/]/)
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
encoder.end_group state
|
|
||||||
state = :initial
|
|
||||||
string_delimiter = nil
|
|
||||||
elsif state == :string && (match = scan(/ \\ (?: #{ESCAPE} | #{UNICODE_ESCAPE} ) /mox))
|
|
||||||
if string_delimiter == "'" && !(match == "\\\\" || match == "\\'")
|
|
||||||
encoder.text_token match, :content
|
|
||||||
else
|
|
||||||
encoder.text_token match, :char
|
|
||||||
end
|
|
||||||
elsif match = scan(/\\./m)
|
|
||||||
encoder.text_token match, :content
|
|
||||||
elsif match = scan(/ \\ | $ /x)
|
|
||||||
encoder.end_group state
|
|
||||||
state = :initial
|
|
||||||
encoder.text_token match, :error
|
|
||||||
else
|
|
||||||
raise_inspect "else case \" reached; %p not handled." % peek(1), encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
else
|
|
||||||
raise_inspect 'Unknown state', encoder
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
last_token_dot = match == '.'
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
if state == :string
|
|
||||||
encoder.end_group state
|
|
||||||
end
|
|
||||||
|
|
||||||
encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,421 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
module Java::BuiltinTypes # :nodoc:
|
|
||||||
|
|
||||||
#:nocov:
|
|
||||||
List = %w[
|
|
||||||
AbstractAction AbstractBorder AbstractButton AbstractCellEditor AbstractCollection
|
|
||||||
AbstractColorChooserPanel AbstractDocument AbstractExecutorService AbstractInterruptibleChannel
|
|
||||||
AbstractLayoutCache AbstractList AbstractListModel AbstractMap AbstractMethodError AbstractPreferences
|
|
||||||
AbstractQueue AbstractQueuedSynchronizer AbstractSelectableChannel AbstractSelectionKey AbstractSelector
|
|
||||||
AbstractSequentialList AbstractSet AbstractSpinnerModel AbstractTableModel AbstractUndoableEdit
|
|
||||||
AbstractWriter AccessControlContext AccessControlException AccessController AccessException Accessible
|
|
||||||
AccessibleAction AccessibleAttributeSequence AccessibleBundle AccessibleComponent AccessibleContext
|
|
||||||
AccessibleEditableText AccessibleExtendedComponent AccessibleExtendedTable AccessibleExtendedText
|
|
||||||
AccessibleHyperlink AccessibleHypertext AccessibleIcon AccessibleKeyBinding AccessibleObject
|
|
||||||
AccessibleRelation AccessibleRelationSet AccessibleResourceBundle AccessibleRole AccessibleSelection
|
|
||||||
AccessibleState AccessibleStateSet AccessibleStreamable AccessibleTable AccessibleTableModelChange
|
|
||||||
AccessibleText AccessibleTextSequence AccessibleValue AccountException AccountExpiredException
|
|
||||||
AccountLockedException AccountNotFoundException Acl AclEntry AclNotFoundException Action ActionEvent
|
|
||||||
ActionListener ActionMap ActionMapUIResource Activatable ActivateFailedException ActivationDesc
|
|
||||||
ActivationException ActivationGroup ActivationGroupDesc ActivationGroupID ActivationGroup_Stub
|
|
||||||
ActivationID ActivationInstantiator ActivationMonitor ActivationSystem Activator ActiveEvent
|
|
||||||
ActivityCompletedException ActivityRequiredException Adjustable AdjustmentEvent AdjustmentListener
|
|
||||||
Adler32 AffineTransform AffineTransformOp AlgorithmParameterGenerator AlgorithmParameterGeneratorSpi
|
|
||||||
AlgorithmParameters AlgorithmParameterSpec AlgorithmParametersSpi AllPermission AlphaComposite
|
|
||||||
AlreadyBoundException AlreadyConnectedException AncestorEvent AncestorListener AnnotatedElement
|
|
||||||
Annotation AnnotationFormatError AnnotationTypeMismatchException AppConfigurationEntry Appendable Applet
|
|
||||||
AppletContext AppletInitializer AppletStub Arc2D Area AreaAveragingScaleFilter ArithmeticException Array
|
|
||||||
ArrayBlockingQueue ArrayIndexOutOfBoundsException ArrayList Arrays ArrayStoreException ArrayType
|
|
||||||
AssertionError AsyncBoxView AsynchronousCloseException AtomicBoolean AtomicInteger AtomicIntegerArray
|
|
||||||
AtomicIntegerFieldUpdater AtomicLong AtomicLongArray AtomicLongFieldUpdater AtomicMarkableReference
|
|
||||||
AtomicReference AtomicReferenceArray AtomicReferenceFieldUpdater AtomicStampedReference Attribute
|
|
||||||
AttributeChangeNotification AttributeChangeNotificationFilter AttributedCharacterIterator
|
|
||||||
AttributedString AttributeException AttributeInUseException AttributeList AttributeModificationException
|
|
||||||
AttributeNotFoundException Attributes AttributeSet AttributeSetUtilities AttributeValueExp AudioClip
|
|
||||||
AudioFileFormat AudioFileReader AudioFileWriter AudioFormat AudioInputStream AudioPermission AudioSystem
|
|
||||||
AuthenticationException AuthenticationNotSupportedException Authenticator AuthorizeCallback
|
|
||||||
AuthPermission AuthProvider Autoscroll AWTError AWTEvent AWTEventListener AWTEventListenerProxy
|
|
||||||
AWTEventMulticaster AWTException AWTKeyStroke AWTPermission BackingStoreException
|
|
||||||
BadAttributeValueExpException BadBinaryOpValueExpException BadLocationException BadPaddingException
|
|
||||||
BadStringOperationException BandCombineOp BandedSampleModel BaseRowSet BasicArrowButton BasicAttribute
|
|
||||||
BasicAttributes BasicBorders BasicButtonListener BasicButtonUI BasicCheckBoxMenuItemUI BasicCheckBoxUI
|
|
||||||
BasicColorChooserUI BasicComboBoxEditor BasicComboBoxRenderer BasicComboBoxUI BasicComboPopup
|
|
||||||
BasicControl BasicDesktopIconUI BasicDesktopPaneUI BasicDirectoryModel BasicEditorPaneUI
|
|
||||||
BasicFileChooserUI BasicFormattedTextFieldUI BasicGraphicsUtils BasicHTML BasicIconFactory
|
|
||||||
BasicInternalFrameTitlePane BasicInternalFrameUI BasicLabelUI BasicListUI BasicLookAndFeel
|
|
||||||
BasicMenuBarUI BasicMenuItemUI BasicMenuUI BasicOptionPaneUI BasicPanelUI BasicPasswordFieldUI
|
|
||||||
BasicPermission BasicPopupMenuSeparatorUI BasicPopupMenuUI BasicProgressBarUI BasicRadioButtonMenuItemUI
|
|
||||||
BasicRadioButtonUI BasicRootPaneUI BasicScrollBarUI BasicScrollPaneUI BasicSeparatorUI BasicSliderUI
|
|
||||||
BasicSpinnerUI BasicSplitPaneDivider BasicSplitPaneUI BasicStroke BasicTabbedPaneUI BasicTableHeaderUI
|
|
||||||
BasicTableUI BasicTextAreaUI BasicTextFieldUI BasicTextPaneUI BasicTextUI BasicToggleButtonUI
|
|
||||||
BasicToolBarSeparatorUI BasicToolBarUI BasicToolTipUI BasicTreeUI BasicViewportUI BatchUpdateException
|
|
||||||
BeanContext BeanContextChild BeanContextChildComponentProxy BeanContextChildSupport
|
|
||||||
BeanContextContainerProxy BeanContextEvent BeanContextMembershipEvent BeanContextMembershipListener
|
|
||||||
BeanContextProxy BeanContextServiceAvailableEvent BeanContextServiceProvider
|
|
||||||
BeanContextServiceProviderBeanInfo BeanContextServiceRevokedEvent BeanContextServiceRevokedListener
|
|
||||||
BeanContextServices BeanContextServicesListener BeanContextServicesSupport BeanContextSupport
|
|
||||||
BeanDescriptor BeanInfo Beans BevelBorder Bidi BigDecimal BigInteger BinaryRefAddr BindException Binding
|
|
||||||
BitSet Blob BlockingQueue BlockView BMPImageWriteParam Book Boolean BooleanControl Border BorderFactory
|
|
||||||
BorderLayout BorderUIResource BoundedRangeModel Box BoxLayout BoxView BreakIterator
|
|
||||||
BrokenBarrierException Buffer BufferCapabilities BufferedImage BufferedImageFilter BufferedImageOp
|
|
||||||
BufferedInputStream BufferedOutputStream BufferedReader BufferedWriter BufferOverflowException
|
|
||||||
BufferStrategy BufferUnderflowException Button ButtonGroup ButtonModel ButtonUI Byte
|
|
||||||
ByteArrayInputStream ByteArrayOutputStream ByteBuffer ByteChannel ByteLookupTable ByteOrder CachedRowSet
|
|
||||||
CacheRequest CacheResponse Calendar Callable CallableStatement Callback CallbackHandler
|
|
||||||
CancelablePrintJob CancellationException CancelledKeyException CannotProceedException
|
|
||||||
CannotRedoException CannotUndoException Canvas CardLayout Caret CaretEvent CaretListener CellEditor
|
|
||||||
CellEditorListener CellRendererPane Certificate CertificateEncodingException CertificateException
|
|
||||||
CertificateExpiredException CertificateFactory CertificateFactorySpi CertificateNotYetValidException
|
|
||||||
CertificateParsingException CertPath CertPathBuilder CertPathBuilderException CertPathBuilderResult
|
|
||||||
CertPathBuilderSpi CertPathParameters CertPathTrustManagerParameters CertPathValidator
|
|
||||||
CertPathValidatorException CertPathValidatorResult CertPathValidatorSpi CertSelector CertStore
|
|
||||||
CertStoreException CertStoreParameters CertStoreSpi ChangedCharSetException ChangeEvent ChangeListener
|
|
||||||
Channel Channels Character CharacterCodingException CharacterIterator CharArrayReader CharArrayWriter
|
|
||||||
CharBuffer CharConversionException CharSequence Charset CharsetDecoder CharsetEncoder CharsetProvider
|
|
||||||
Checkbox CheckboxGroup CheckboxMenuItem CheckedInputStream CheckedOutputStream Checksum Choice
|
|
||||||
ChoiceCallback ChoiceFormat Chromaticity Cipher CipherInputStream CipherOutputStream CipherSpi Class
|
|
||||||
ClassCastException ClassCircularityError ClassDefinition ClassDesc ClassFileTransformer ClassFormatError
|
|
||||||
ClassLoader ClassLoaderRepository ClassLoadingMXBean ClassNotFoundException Clip Clipboard
|
|
||||||
ClipboardOwner Clob Cloneable CloneNotSupportedException Closeable ClosedByInterruptException
|
|
||||||
ClosedChannelException ClosedSelectorException CMMException CoderMalfunctionError CoderResult CodeSigner
|
|
||||||
CodeSource CodingErrorAction CollationElementIterator CollationKey Collator Collection
|
|
||||||
CollectionCertStoreParameters Collections Color ColorChooserComponentFactory ColorChooserUI
|
|
||||||
ColorConvertOp ColorModel ColorSelectionModel ColorSpace ColorSupported ColorType ColorUIResource
|
|
||||||
ComboBoxEditor ComboBoxModel ComboBoxUI ComboPopup CommunicationException Comparable Comparator
|
|
||||||
CompilationMXBean Compiler CompletionService Component ComponentAdapter ComponentColorModel
|
|
||||||
ComponentEvent ComponentInputMap ComponentInputMapUIResource ComponentListener ComponentOrientation
|
|
||||||
ComponentSampleModel ComponentUI ComponentView Composite CompositeContext CompositeData
|
|
||||||
CompositeDataSupport CompositeName CompositeType CompositeView CompoundBorder CompoundControl
|
|
||||||
CompoundEdit CompoundName Compression ConcurrentHashMap ConcurrentLinkedQueue ConcurrentMap
|
|
||||||
ConcurrentModificationException Condition Configuration ConfigurationException ConfirmationCallback
|
|
||||||
ConnectException ConnectIOException Connection ConnectionEvent ConnectionEventListener
|
|
||||||
ConnectionPendingException ConnectionPoolDataSource ConsoleHandler Constructor Container
|
|
||||||
ContainerAdapter ContainerEvent ContainerListener ContainerOrderFocusTraversalPolicy ContentHandler
|
|
||||||
ContentHandlerFactory ContentModel Context ContextNotEmptyException ContextualRenderedImageFactory
|
|
||||||
Control ControlFactory ControllerEventListener ConvolveOp CookieHandler Copies CopiesSupported
|
|
||||||
CopyOnWriteArrayList CopyOnWriteArraySet CountDownLatch CounterMonitor CounterMonitorMBean CRC32
|
|
||||||
CredentialException CredentialExpiredException CredentialNotFoundException CRL CRLException CRLSelector
|
|
||||||
CropImageFilter CSS CubicCurve2D Currency Cursor Customizer CyclicBarrier DatabaseMetaData DataBuffer
|
|
||||||
DataBufferByte DataBufferDouble DataBufferFloat DataBufferInt DataBufferShort DataBufferUShort
|
|
||||||
DataFlavor DataFormatException DatagramChannel DatagramPacket DatagramSocket DatagramSocketImpl
|
|
||||||
DatagramSocketImplFactory DataInput DataInputStream DataLine DataOutput DataOutputStream DataSource
|
|
||||||
DataTruncation DatatypeConfigurationException DatatypeConstants DatatypeFactory Date DateFormat
|
|
||||||
DateFormatSymbols DateFormatter DateTimeAtCompleted DateTimeAtCreation DateTimeAtProcessing
|
|
||||||
DateTimeSyntax DebugGraphics DecimalFormat DecimalFormatSymbols DefaultBoundedRangeModel
|
|
||||||
DefaultButtonModel DefaultCaret DefaultCellEditor DefaultColorSelectionModel DefaultComboBoxModel
|
|
||||||
DefaultDesktopManager DefaultEditorKit DefaultFocusManager DefaultFocusTraversalPolicy DefaultFormatter
|
|
||||||
DefaultFormatterFactory DefaultHighlighter DefaultKeyboardFocusManager DefaultListCellRenderer
|
|
||||||
DefaultListModel DefaultListSelectionModel DefaultLoaderRepository DefaultMenuLayout DefaultMetalTheme
|
|
||||||
DefaultMutableTreeNode DefaultPersistenceDelegate DefaultSingleSelectionModel DefaultStyledDocument
|
|
||||||
DefaultTableCellRenderer DefaultTableColumnModel DefaultTableModel DefaultTextUI DefaultTreeCellEditor
|
|
||||||
DefaultTreeCellRenderer DefaultTreeModel DefaultTreeSelectionModel Deflater DeflaterOutputStream Delayed
|
|
||||||
DelayQueue DelegationPermission Deprecated Descriptor DescriptorAccess DescriptorSupport DESedeKeySpec
|
|
||||||
DesignMode DESKeySpec DesktopIconUI DesktopManager DesktopPaneUI Destination Destroyable
|
|
||||||
DestroyFailedException DGC DHGenParameterSpec DHKey DHParameterSpec DHPrivateKey DHPrivateKeySpec
|
|
||||||
DHPublicKey DHPublicKeySpec Dialog Dictionary DigestException DigestInputStream DigestOutputStream
|
|
||||||
Dimension Dimension2D DimensionUIResource DirContext DirectColorModel DirectoryManager DirObjectFactory
|
|
||||||
DirStateFactory DisplayMode DnDConstants Doc DocAttribute DocAttributeSet DocFlavor DocPrintJob Document
|
|
||||||
DocumentBuilder DocumentBuilderFactory Documented DocumentEvent DocumentFilter DocumentListener
|
|
||||||
DocumentName DocumentParser DomainCombiner DOMLocator DOMResult DOMSource Double DoubleBuffer
|
|
||||||
DragGestureEvent DragGestureListener DragGestureRecognizer DragSource DragSourceAdapter
|
|
||||||
DragSourceContext DragSourceDragEvent DragSourceDropEvent DragSourceEvent DragSourceListener
|
|
||||||
DragSourceMotionListener Driver DriverManager DriverPropertyInfo DropTarget DropTargetAdapter
|
|
||||||
DropTargetContext DropTargetDragEvent DropTargetDropEvent DropTargetEvent DropTargetListener DSAKey
|
|
||||||
DSAKeyPairGenerator DSAParameterSpec DSAParams DSAPrivateKey DSAPrivateKeySpec DSAPublicKey
|
|
||||||
DSAPublicKeySpec DTD DTDConstants DuplicateFormatFlagsException Duration DynamicMBean ECField ECFieldF2m
|
|
||||||
ECFieldFp ECGenParameterSpec ECKey ECParameterSpec ECPoint ECPrivateKey ECPrivateKeySpec ECPublicKey
|
|
||||||
ECPublicKeySpec EditorKit Element ElementIterator ElementType Ellipse2D EllipticCurve EmptyBorder
|
|
||||||
EmptyStackException EncodedKeySpec Encoder EncryptedPrivateKeyInfo Entity Enum
|
|
||||||
EnumConstantNotPresentException EnumControl Enumeration EnumMap EnumSet EnumSyntax EOFException Error
|
|
||||||
ErrorListener ErrorManager EtchedBorder Event EventContext EventDirContext EventHandler EventListener
|
|
||||||
EventListenerList EventListenerProxy EventObject EventQueue EventSetDescriptor Exception
|
|
||||||
ExceptionInInitializerError ExceptionListener Exchanger ExecutionException Executor
|
|
||||||
ExecutorCompletionService Executors ExecutorService ExemptionMechanism ExemptionMechanismException
|
|
||||||
ExemptionMechanismSpi ExpandVetoException ExportException Expression ExtendedRequest ExtendedResponse
|
|
||||||
Externalizable FactoryConfigurationError FailedLoginException FeatureDescriptor Fidelity Field
|
|
||||||
FieldPosition FieldView File FileCacheImageInputStream FileCacheImageOutputStream FileChannel
|
|
||||||
FileChooserUI FileDescriptor FileDialog FileFilter FileHandler FileImageInputStream
|
|
||||||
FileImageOutputStream FileInputStream FileLock FileLockInterruptionException FilenameFilter FileNameMap
|
|
||||||
FileNotFoundException FileOutputStream FilePermission FileReader FileSystemView FileView FileWriter
|
|
||||||
Filter FilteredImageSource FilteredRowSet FilterInputStream FilterOutputStream FilterReader FilterWriter
|
|
||||||
Finishings FixedHeightLayoutCache FlatteningPathIterator FlavorEvent FlavorException FlavorListener
|
|
||||||
FlavorMap FlavorTable Float FloatBuffer FloatControl FlowLayout FlowView Flushable FocusAdapter
|
|
||||||
FocusEvent FocusListener FocusManager FocusTraversalPolicy Font FontFormatException FontMetrics
|
|
||||||
FontRenderContext FontUIResource Format FormatConversionProvider FormatFlagsConversionMismatchException
|
|
||||||
Formattable FormattableFlags Formatter FormatterClosedException FormSubmitEvent FormView Frame Future
|
|
||||||
FutureTask GapContent GarbageCollectorMXBean GatheringByteChannel GaugeMonitor GaugeMonitorMBean
|
|
||||||
GeneralPath GeneralSecurityException GenericArrayType GenericDeclaration GenericSignatureFormatError
|
|
||||||
GlyphJustificationInfo GlyphMetrics GlyphVector GlyphView GradientPaint GraphicAttribute Graphics
|
|
||||||
Graphics2D GraphicsConfigTemplate GraphicsConfiguration GraphicsDevice GraphicsEnvironment GrayFilter
|
|
||||||
GregorianCalendar GridBagConstraints GridBagLayout GridLayout Group Guard GuardedObject GZIPInputStream
|
|
||||||
GZIPOutputStream Handler HandshakeCompletedEvent HandshakeCompletedListener HasControls HashAttributeSet
|
|
||||||
HashDocAttributeSet HashMap HashPrintJobAttributeSet HashPrintRequestAttributeSet
|
|
||||||
HashPrintServiceAttributeSet HashSet Hashtable HeadlessException HierarchyBoundsAdapter
|
|
||||||
HierarchyBoundsListener HierarchyEvent HierarchyListener Highlighter HostnameVerifier HTML HTMLDocument
|
|
||||||
HTMLEditorKit HTMLFrameHyperlinkEvent HTMLWriter HttpRetryException HttpsURLConnection HttpURLConnection
|
|
||||||
HyperlinkEvent HyperlinkListener ICC_ColorSpace ICC_Profile ICC_ProfileGray ICC_ProfileRGB Icon
|
|
||||||
IconUIResource IconView Identity IdentityHashMap IdentityScope IIOByteBuffer IIOException IIOImage
|
|
||||||
IIOInvalidTreeException IIOMetadata IIOMetadataController IIOMetadataFormat IIOMetadataFormatImpl
|
|
||||||
IIOMetadataNode IIOParam IIOParamController IIOReadProgressListener IIOReadUpdateListener
|
|
||||||
IIOReadWarningListener IIORegistry IIOServiceProvider IIOWriteProgressListener IIOWriteWarningListener
|
|
||||||
IllegalAccessError IllegalAccessException IllegalArgumentException IllegalBlockingModeException
|
|
||||||
IllegalBlockSizeException IllegalCharsetNameException IllegalClassFormatException
|
|
||||||
IllegalComponentStateException IllegalFormatCodePointException IllegalFormatConversionException
|
|
||||||
IllegalFormatException IllegalFormatFlagsException IllegalFormatPrecisionException
|
|
||||||
IllegalFormatWidthException IllegalMonitorStateException IllegalPathStateException
|
|
||||||
IllegalSelectorException IllegalStateException IllegalThreadStateException Image ImageCapabilities
|
|
||||||
ImageConsumer ImageFilter ImageGraphicAttribute ImageIcon ImageInputStream ImageInputStreamImpl
|
|
||||||
ImageInputStreamSpi ImageIO ImageObserver ImageOutputStream ImageOutputStreamImpl ImageOutputStreamSpi
|
|
||||||
ImageProducer ImageReader ImageReaderSpi ImageReaderWriterSpi ImageReadParam ImageTranscoder
|
|
||||||
ImageTranscoderSpi ImageTypeSpecifier ImageView ImageWriteParam ImageWriter ImageWriterSpi
|
|
||||||
ImagingOpException IncompatibleClassChangeError IncompleteAnnotationException IndexColorModel
|
|
||||||
IndexedPropertyChangeEvent IndexedPropertyDescriptor IndexOutOfBoundsException Inet4Address Inet6Address
|
|
||||||
InetAddress InetSocketAddress Inflater InflaterInputStream InheritableThreadLocal Inherited
|
|
||||||
InitialContext InitialContextFactory InitialContextFactoryBuilder InitialDirContext InitialLdapContext
|
|
||||||
InlineView InputContext InputEvent InputMap InputMapUIResource InputMethod InputMethodContext
|
|
||||||
InputMethodDescriptor InputMethodEvent InputMethodHighlight InputMethodListener InputMethodRequests
|
|
||||||
InputMismatchException InputStream InputStreamReader InputSubset InputVerifier Insets InsetsUIResource
|
|
||||||
InstanceAlreadyExistsException InstanceNotFoundException InstantiationError InstantiationException
|
|
||||||
Instrument Instrumentation InsufficientResourcesException IntBuffer Integer IntegerSyntax InternalError
|
|
||||||
InternalFrameAdapter InternalFrameEvent InternalFrameFocusTraversalPolicy InternalFrameListener
|
|
||||||
InternalFrameUI InternationalFormatter InterruptedException InterruptedIOException
|
|
||||||
InterruptedNamingException InterruptibleChannel IntrospectionException Introspector
|
|
||||||
InvalidActivityException InvalidAlgorithmParameterException InvalidApplicationException
|
|
||||||
InvalidAttributeIdentifierException InvalidAttributesException InvalidAttributeValueException
|
|
||||||
InvalidClassException InvalidDnDOperationException InvalidKeyException InvalidKeySpecException
|
|
||||||
InvalidMarkException InvalidMidiDataException InvalidNameException InvalidObjectException
|
|
||||||
InvalidOpenTypeException InvalidParameterException InvalidParameterSpecException
|
|
||||||
InvalidPreferencesFormatException InvalidPropertiesFormatException InvalidRelationIdException
|
|
||||||
InvalidRelationServiceException InvalidRelationTypeException InvalidRoleInfoException
|
|
||||||
InvalidRoleValueException InvalidSearchControlsException InvalidSearchFilterException
|
|
||||||
InvalidTargetObjectTypeException InvalidTransactionException InvocationEvent InvocationHandler
|
|
||||||
InvocationTargetException IOException ItemEvent ItemListener ItemSelectable Iterable Iterator
|
|
||||||
IvParameterSpec JApplet JarEntry JarException JarFile JarInputStream JarOutputStream JarURLConnection
|
|
||||||
JButton JCheckBox JCheckBoxMenuItem JColorChooser JComboBox JComponent JdbcRowSet JDesktopPane JDialog
|
|
||||||
JEditorPane JFileChooser JFormattedTextField JFrame JInternalFrame JLabel JLayeredPane JList JMenu
|
|
||||||
JMenuBar JMenuItem JMException JMRuntimeException JMXAuthenticator JMXConnectionNotification
|
|
||||||
JMXConnector JMXConnectorFactory JMXConnectorProvider JMXConnectorServer JMXConnectorServerFactory
|
|
||||||
JMXConnectorServerMBean JMXConnectorServerProvider JMXPrincipal JMXProviderException
|
|
||||||
JMXServerErrorException JMXServiceURL JobAttributes JobHoldUntil JobImpressions JobImpressionsCompleted
|
|
||||||
JobImpressionsSupported JobKOctets JobKOctetsProcessed JobKOctetsSupported JobMediaSheets
|
|
||||||
JobMediaSheetsCompleted JobMediaSheetsSupported JobMessageFromOperator JobName JobOriginatingUserName
|
|
||||||
JobPriority JobPrioritySupported JobSheets JobState JobStateReason JobStateReasons Joinable JoinRowSet
|
|
||||||
JOptionPane JPanel JPasswordField JPEGHuffmanTable JPEGImageReadParam JPEGImageWriteParam JPEGQTable
|
|
||||||
JPopupMenu JProgressBar JRadioButton JRadioButtonMenuItem JRootPane JScrollBar JScrollPane JSeparator
|
|
||||||
JSlider JSpinner JSplitPane JTabbedPane JTable JTableHeader JTextArea JTextComponent JTextField
|
|
||||||
JTextPane JToggleButton JToolBar JToolTip JTree JViewport JWindow KerberosKey KerberosPrincipal
|
|
||||||
KerberosTicket Kernel Key KeyAdapter KeyAgreement KeyAgreementSpi KeyAlreadyExistsException
|
|
||||||
KeyboardFocusManager KeyEvent KeyEventDispatcher KeyEventPostProcessor KeyException KeyFactory
|
|
||||||
KeyFactorySpi KeyGenerator KeyGeneratorSpi KeyListener KeyManagementException KeyManager
|
|
||||||
KeyManagerFactory KeyManagerFactorySpi Keymap KeyPair KeyPairGenerator KeyPairGeneratorSpi KeyRep
|
|
||||||
KeySpec KeyStore KeyStoreBuilderParameters KeyStoreException KeyStoreSpi KeyStroke Label LabelUI
|
|
||||||
LabelView LanguageCallback LastOwnerException LayeredHighlighter LayoutFocusTraversalPolicy
|
|
||||||
LayoutManager LayoutManager2 LayoutQueue LDAPCertStoreParameters LdapContext LdapName
|
|
||||||
LdapReferralException Lease Level LimitExceededException Line Line2D LineBorder LineBreakMeasurer
|
|
||||||
LineEvent LineListener LineMetrics LineNumberInputStream LineNumberReader LineUnavailableException
|
|
||||||
LinkageError LinkedBlockingQueue LinkedHashMap LinkedHashSet LinkedList LinkException LinkLoopException
|
|
||||||
LinkRef List ListCellRenderer ListDataEvent ListDataListener ListenerNotFoundException ListIterator
|
|
||||||
ListModel ListResourceBundle ListSelectionEvent ListSelectionListener ListSelectionModel ListUI ListView
|
|
||||||
LoaderHandler Locale LocateRegistry Lock LockSupport Logger LoggingMXBean LoggingPermission LoginContext
|
|
||||||
LoginException LoginModule LogManager LogRecord LogStream Long LongBuffer LookAndFeel LookupOp
|
|
||||||
LookupTable Mac MacSpi MalformedInputException MalformedLinkException MalformedObjectNameException
|
|
||||||
MalformedParameterizedTypeException MalformedURLException ManagementFactory ManagementPermission
|
|
||||||
ManageReferralControl ManagerFactoryParameters Manifest Map MappedByteBuffer MarshalException
|
|
||||||
MarshalledObject MaskFormatter Matcher MatchResult Math MathContext MatteBorder MBeanAttributeInfo
|
|
||||||
MBeanConstructorInfo MBeanException MBeanFeatureInfo MBeanInfo MBeanNotificationInfo MBeanOperationInfo
|
|
||||||
MBeanParameterInfo MBeanPermission MBeanRegistration MBeanRegistrationException MBeanServer
|
|
||||||
MBeanServerBuilder MBeanServerConnection MBeanServerDelegate MBeanServerDelegateMBean MBeanServerFactory
|
|
||||||
MBeanServerForwarder MBeanServerInvocationHandler MBeanServerNotification MBeanServerNotificationFilter
|
|
||||||
MBeanServerPermission MBeanTrustPermission Media MediaName MediaPrintableArea MediaSize MediaSizeName
|
|
||||||
MediaTracker MediaTray Member MemoryCacheImageInputStream MemoryCacheImageOutputStream MemoryHandler
|
|
||||||
MemoryImageSource MemoryManagerMXBean MemoryMXBean MemoryNotificationInfo MemoryPoolMXBean MemoryType
|
|
||||||
MemoryUsage Menu MenuBar MenuBarUI MenuComponent MenuContainer MenuDragMouseEvent MenuDragMouseListener
|
|
||||||
MenuElement MenuEvent MenuItem MenuItemUI MenuKeyEvent MenuKeyListener MenuListener MenuSelectionManager
|
|
||||||
MenuShortcut MessageDigest MessageDigestSpi MessageFormat MetaEventListener MetalBorders MetalButtonUI
|
|
||||||
MetalCheckBoxIcon MetalCheckBoxUI MetalComboBoxButton MetalComboBoxEditor MetalComboBoxIcon
|
|
||||||
MetalComboBoxUI MetalDesktopIconUI MetalFileChooserUI MetalIconFactory MetalInternalFrameTitlePane
|
|
||||||
MetalInternalFrameUI MetalLabelUI MetalLookAndFeel MetalMenuBarUI MetalPopupMenuSeparatorUI
|
|
||||||
MetalProgressBarUI MetalRadioButtonUI MetalRootPaneUI MetalScrollBarUI MetalScrollButton
|
|
||||||
MetalScrollPaneUI MetalSeparatorUI MetalSliderUI MetalSplitPaneUI MetalTabbedPaneUI MetalTextFieldUI
|
|
||||||
MetalTheme MetalToggleButtonUI MetalToolBarUI MetalToolTipUI MetalTreeUI MetaMessage Method
|
|
||||||
MethodDescriptor MGF1ParameterSpec MidiChannel MidiDevice MidiDeviceProvider MidiEvent MidiFileFormat
|
|
||||||
MidiFileReader MidiFileWriter MidiMessage MidiSystem MidiUnavailableException MimeTypeParseException
|
|
||||||
MinimalHTMLWriter MissingFormatArgumentException MissingFormatWidthException MissingResourceException
|
|
||||||
Mixer MixerProvider MLet MLetMBean ModelMBean ModelMBeanAttributeInfo ModelMBeanConstructorInfo
|
|
||||||
ModelMBeanInfo ModelMBeanInfoSupport ModelMBeanNotificationBroadcaster ModelMBeanNotificationInfo
|
|
||||||
ModelMBeanOperationInfo ModificationItem Modifier Monitor MonitorMBean MonitorNotification
|
|
||||||
MonitorSettingException MouseAdapter MouseDragGestureRecognizer MouseEvent MouseInfo MouseInputAdapter
|
|
||||||
MouseInputListener MouseListener MouseMotionAdapter MouseMotionListener MouseWheelEvent
|
|
||||||
MouseWheelListener MultiButtonUI MulticastSocket MultiColorChooserUI MultiComboBoxUI MultiDesktopIconUI
|
|
||||||
MultiDesktopPaneUI MultiDoc MultiDocPrintJob MultiDocPrintService MultiFileChooserUI
|
|
||||||
MultiInternalFrameUI MultiLabelUI MultiListUI MultiLookAndFeel MultiMenuBarUI MultiMenuItemUI
|
|
||||||
MultiOptionPaneUI MultiPanelUI MultiPixelPackedSampleModel MultipleDocumentHandling MultipleMaster
|
|
||||||
MultiPopupMenuUI MultiProgressBarUI MultiRootPaneUI MultiScrollBarUI MultiScrollPaneUI MultiSeparatorUI
|
|
||||||
MultiSliderUI MultiSpinnerUI MultiSplitPaneUI MultiTabbedPaneUI MultiTableHeaderUI MultiTableUI
|
|
||||||
MultiTextUI MultiToolBarUI MultiToolTipUI MultiTreeUI MultiViewportUI MutableAttributeSet
|
|
||||||
MutableComboBoxModel MutableTreeNode Name NameAlreadyBoundException NameCallback NameClassPair
|
|
||||||
NameNotFoundException NameParser NamespaceChangeListener NamespaceContext Naming NamingEnumeration
|
|
||||||
NamingEvent NamingException NamingExceptionEvent NamingListener NamingManager NamingSecurityException
|
|
||||||
NavigationFilter NegativeArraySizeException NetPermission NetworkInterface NoClassDefFoundError
|
|
||||||
NoConnectionPendingException NodeChangeEvent NodeChangeListener NoInitialContextException
|
|
||||||
NoninvertibleTransformException NonReadableChannelException NonWritableChannelException
|
|
||||||
NoPermissionException NoRouteToHostException NoSuchAlgorithmException NoSuchAttributeException
|
|
||||||
NoSuchElementException NoSuchFieldError NoSuchFieldException NoSuchMethodError NoSuchMethodException
|
|
||||||
NoSuchObjectException NoSuchPaddingException NoSuchProviderException NotActiveException
|
|
||||||
NotBoundException NotCompliantMBeanException NotContextException Notification NotificationBroadcaster
|
|
||||||
NotificationBroadcasterSupport NotificationEmitter NotificationFilter NotificationFilterSupport
|
|
||||||
NotificationListener NotificationResult NotOwnerException NotSerializableException NotYetBoundException
|
|
||||||
NotYetConnectedException NullCipher NullPointerException Number NumberFormat NumberFormatException
|
|
||||||
NumberFormatter NumberOfDocuments NumberOfInterveningJobs NumberUp NumberUpSupported NumericShaper
|
|
||||||
OAEPParameterSpec Object ObjectChangeListener ObjectFactory ObjectFactoryBuilder ObjectInput
|
|
||||||
ObjectInputStream ObjectInputValidation ObjectInstance ObjectName ObjectOutput ObjectOutputStream
|
|
||||||
ObjectStreamClass ObjectStreamConstants ObjectStreamException ObjectStreamField ObjectView ObjID
|
|
||||||
Observable Observer OceanTheme OpenDataException OpenMBeanAttributeInfo OpenMBeanAttributeInfoSupport
|
|
||||||
OpenMBeanConstructorInfo OpenMBeanConstructorInfoSupport OpenMBeanInfo OpenMBeanInfoSupport
|
|
||||||
OpenMBeanOperationInfo OpenMBeanOperationInfoSupport OpenMBeanParameterInfo
|
|
||||||
OpenMBeanParameterInfoSupport OpenType OperatingSystemMXBean Operation OperationNotSupportedException
|
|
||||||
OperationsException Option OptionalDataException OptionPaneUI OrientationRequested OutOfMemoryError
|
|
||||||
OutputDeviceAssigned OutputKeys OutputStream OutputStreamWriter OverlappingFileLockException
|
|
||||||
OverlayLayout Override Owner Pack200 Package PackedColorModel Pageable PageAttributes
|
|
||||||
PagedResultsControl PagedResultsResponseControl PageFormat PageRanges PagesPerMinute PagesPerMinuteColor
|
|
||||||
Paint PaintContext PaintEvent Panel PanelUI Paper ParagraphView ParameterBlock ParameterDescriptor
|
|
||||||
ParameterizedType ParameterMetaData ParseException ParsePosition Parser ParserConfigurationException
|
|
||||||
ParserDelegator PartialResultException PasswordAuthentication PasswordCallback PasswordView Patch
|
|
||||||
PathIterator Pattern PatternSyntaxException PBEKey PBEKeySpec PBEParameterSpec PDLOverrideSupported
|
|
||||||
Permission PermissionCollection Permissions PersistenceDelegate PersistentMBean PhantomReference Pipe
|
|
||||||
PipedInputStream PipedOutputStream PipedReader PipedWriter PixelGrabber PixelInterleavedSampleModel
|
|
||||||
PKCS8EncodedKeySpec PKIXBuilderParameters PKIXCertPathBuilderResult PKIXCertPathChecker
|
|
||||||
PKIXCertPathValidatorResult PKIXParameters PlainDocument PlainView Point Point2D PointerInfo Policy
|
|
||||||
PolicyNode PolicyQualifierInfo Polygon PooledConnection Popup PopupFactory PopupMenu PopupMenuEvent
|
|
||||||
PopupMenuListener PopupMenuUI Port PortableRemoteObject PortableRemoteObjectDelegate
|
|
||||||
PortUnreachableException Position Predicate PreferenceChangeEvent PreferenceChangeListener Preferences
|
|
||||||
PreferencesFactory PreparedStatement PresentationDirection Principal Printable PrinterAbortException
|
|
||||||
PrinterException PrinterGraphics PrinterInfo PrinterIOException PrinterIsAcceptingJobs PrinterJob
|
|
||||||
PrinterLocation PrinterMakeAndModel PrinterMessageFromOperator PrinterMoreInfo
|
|
||||||
PrinterMoreInfoManufacturer PrinterName PrinterResolution PrinterState PrinterStateReason
|
|
||||||
PrinterStateReasons PrinterURI PrintEvent PrintException PrintGraphics PrintJob PrintJobAdapter
|
|
||||||
PrintJobAttribute PrintJobAttributeEvent PrintJobAttributeListener PrintJobAttributeSet PrintJobEvent
|
|
||||||
PrintJobListener PrintQuality PrintRequestAttribute PrintRequestAttributeSet PrintService
|
|
||||||
PrintServiceAttribute PrintServiceAttributeEvent PrintServiceAttributeListener PrintServiceAttributeSet
|
|
||||||
PrintServiceLookup PrintStream PrintWriter PriorityBlockingQueue PriorityQueue PrivateClassLoader
|
|
||||||
PrivateCredentialPermission PrivateKey PrivateMLet PrivilegedAction PrivilegedActionException
|
|
||||||
PrivilegedExceptionAction Process ProcessBuilder ProfileDataException ProgressBarUI ProgressMonitor
|
|
||||||
ProgressMonitorInputStream Properties PropertyChangeEvent PropertyChangeListener
|
|
||||||
PropertyChangeListenerProxy PropertyChangeSupport PropertyDescriptor PropertyEditor
|
|
||||||
PropertyEditorManager PropertyEditorSupport PropertyPermission PropertyResourceBundle
|
|
||||||
PropertyVetoException ProtectionDomain ProtocolException Provider ProviderException Proxy ProxySelector
|
|
||||||
PSource PSSParameterSpec PublicKey PushbackInputStream PushbackReader QName QuadCurve2D Query QueryEval
|
|
||||||
QueryExp Queue QueuedJobCount Random RandomAccess RandomAccessFile Raster RasterFormatException RasterOp
|
|
||||||
RC2ParameterSpec RC5ParameterSpec Rdn Readable ReadableByteChannel Reader ReadOnlyBufferException
|
|
||||||
ReadWriteLock RealmCallback RealmChoiceCallback Receiver Rectangle Rectangle2D RectangularShape
|
|
||||||
ReentrantLock ReentrantReadWriteLock Ref RefAddr Reference Referenceable ReferenceQueue
|
|
||||||
ReferenceUriSchemesSupported ReferralException ReflectionException ReflectPermission Refreshable
|
|
||||||
RefreshFailedException Region RegisterableService Registry RegistryHandler RejectedExecutionException
|
|
||||||
RejectedExecutionHandler Relation RelationException RelationNotFoundException RelationNotification
|
|
||||||
RelationService RelationServiceMBean RelationServiceNotRegisteredException RelationSupport
|
|
||||||
RelationSupportMBean RelationType RelationTypeNotFoundException RelationTypeSupport Remote RemoteCall
|
|
||||||
RemoteException RemoteObject RemoteObjectInvocationHandler RemoteRef RemoteServer RemoteStub
|
|
||||||
RenderableImage RenderableImageOp RenderableImageProducer RenderContext RenderedImage
|
|
||||||
RenderedImageFactory Renderer RenderingHints RepaintManager ReplicateScaleFilter RequestingUserName
|
|
||||||
RequiredModelMBean RescaleOp ResolutionSyntax Resolver ResolveResult ResourceBundle ResponseCache Result
|
|
||||||
ResultSet ResultSetMetaData Retention RetentionPolicy ReverbType RGBImageFilter RMIClassLoader
|
|
||||||
RMIClassLoaderSpi RMIClientSocketFactory RMIConnection RMIConnectionImpl RMIConnectionImpl_Stub
|
|
||||||
RMIConnector RMIConnectorServer RMIFailureHandler RMIIIOPServerImpl RMIJRMPServerImpl
|
|
||||||
RMISecurityException RMISecurityManager RMIServer RMIServerImpl RMIServerImpl_Stub
|
|
||||||
RMIServerSocketFactory RMISocketFactory Robot Role RoleInfo RoleInfoNotFoundException RoleList
|
|
||||||
RoleNotFoundException RoleResult RoleStatus RoleUnresolved RoleUnresolvedList RootPaneContainer
|
|
||||||
RootPaneUI RoundingMode RoundRectangle2D RowMapper RowSet RowSetEvent RowSetInternal RowSetListener
|
|
||||||
RowSetMetaData RowSetMetaDataImpl RowSetReader RowSetWarning RowSetWriter RSAKey RSAKeyGenParameterSpec
|
|
||||||
RSAMultiPrimePrivateCrtKey RSAMultiPrimePrivateCrtKeySpec RSAOtherPrimeInfo RSAPrivateCrtKey
|
|
||||||
RSAPrivateCrtKeySpec RSAPrivateKey RSAPrivateKeySpec RSAPublicKey RSAPublicKeySpec RTFEditorKit
|
|
||||||
RuleBasedCollator Runnable Runtime RuntimeErrorException RuntimeException RuntimeMBeanException
|
|
||||||
RuntimeMXBean RuntimeOperationsException RuntimePermission SampleModel Sasl SaslClient SaslClientFactory
|
|
||||||
SaslException SaslServer SaslServerFactory Savepoint SAXParser SAXParserFactory SAXResult SAXSource
|
|
||||||
SAXTransformerFactory Scanner ScatteringByteChannel ScheduledExecutorService ScheduledFuture
|
|
||||||
ScheduledThreadPoolExecutor Schema SchemaFactory SchemaFactoryLoader SchemaViolationException Scrollable
|
|
||||||
Scrollbar ScrollBarUI ScrollPane ScrollPaneAdjustable ScrollPaneConstants ScrollPaneLayout ScrollPaneUI
|
|
||||||
SealedObject SearchControls SearchResult SecretKey SecretKeyFactory SecretKeyFactorySpi SecretKeySpec
|
|
||||||
SecureCacheResponse SecureClassLoader SecureRandom SecureRandomSpi Security SecurityException
|
|
||||||
SecurityManager SecurityPermission Segment SelectableChannel SelectionKey Selector SelectorProvider
|
|
||||||
Semaphore SeparatorUI Sequence SequenceInputStream Sequencer SerialArray SerialBlob SerialClob
|
|
||||||
SerialDatalink SerialException Serializable SerializablePermission SerialJavaObject SerialRef
|
|
||||||
SerialStruct ServerCloneException ServerError ServerException ServerNotActiveException ServerRef
|
|
||||||
ServerRuntimeException ServerSocket ServerSocketChannel ServerSocketFactory ServiceNotFoundException
|
|
||||||
ServicePermission ServiceRegistry ServiceUI ServiceUIFactory ServiceUnavailableException Set
|
|
||||||
SetOfIntegerSyntax Severity Shape ShapeGraphicAttribute SheetCollate Short ShortBuffer
|
|
||||||
ShortBufferException ShortLookupTable ShortMessage Sides Signature SignatureException SignatureSpi
|
|
||||||
SignedObject Signer SimpleAttributeSet SimpleBeanInfo SimpleDateFormat SimpleDoc SimpleFormatter
|
|
||||||
SimpleTimeZone SimpleType SinglePixelPackedSampleModel SingleSelectionModel Size2DSyntax
|
|
||||||
SizeLimitExceededException SizeRequirements SizeSequence Skeleton SkeletonMismatchException
|
|
||||||
SkeletonNotFoundException SliderUI Socket SocketAddress SocketChannel SocketException SocketFactory
|
|
||||||
SocketHandler SocketImpl SocketImplFactory SocketOptions SocketPermission SocketSecurityException
|
|
||||||
SocketTimeoutException SoftBevelBorder SoftReference SortControl SortedMap SortedSet
|
|
||||||
SortingFocusTraversalPolicy SortKey SortResponseControl Soundbank SoundbankReader SoundbankResource
|
|
||||||
Source SourceDataLine SourceLocator SpinnerDateModel SpinnerListModel SpinnerModel SpinnerNumberModel
|
|
||||||
SpinnerUI SplitPaneUI Spring SpringLayout SQLData SQLException SQLInput SQLInputImpl SQLOutput
|
|
||||||
SQLOutputImpl SQLPermission SQLWarning SSLContext SSLContextSpi SSLEngine SSLEngineResult SSLException
|
|
||||||
SSLHandshakeException SSLKeyException SSLPeerUnverifiedException SSLPermission SSLProtocolException
|
|
||||||
SslRMIClientSocketFactory SslRMIServerSocketFactory SSLServerSocket SSLServerSocketFactory SSLSession
|
|
||||||
SSLSessionBindingEvent SSLSessionBindingListener SSLSessionContext SSLSocket SSLSocketFactory Stack
|
|
||||||
StackOverflowError StackTraceElement StandardMBean StartTlsRequest StartTlsResponse StateEdit
|
|
||||||
StateEditable StateFactory Statement StreamCorruptedException StreamHandler StreamPrintService
|
|
||||||
StreamPrintServiceFactory StreamResult StreamSource StreamTokenizer StrictMath String StringBuffer
|
|
||||||
StringBufferInputStream StringBuilder StringCharacterIterator StringContent
|
|
||||||
StringIndexOutOfBoundsException StringMonitor StringMonitorMBean StringReader StringRefAddr
|
|
||||||
StringSelection StringTokenizer StringValueExp StringWriter Stroke Struct Stub StubDelegate
|
|
||||||
StubNotFoundException Style StyleConstants StyleContext StyledDocument StyledEditorKit StyleSheet
|
|
||||||
Subject SubjectDelegationPermission SubjectDomainCombiner SupportedValuesAttribute SuppressWarnings
|
|
||||||
SwingConstants SwingPropertyChangeSupport SwingUtilities SyncFactory SyncFactoryException
|
|
||||||
SyncFailedException SynchronousQueue SyncProvider SyncProviderException SyncResolver SynthConstants
|
|
||||||
SynthContext Synthesizer SynthGraphicsUtils SynthLookAndFeel SynthPainter SynthStyle SynthStyleFactory
|
|
||||||
SysexMessage System SystemColor SystemFlavorMap TabableView TabbedPaneUI TabExpander TableCellEditor
|
|
||||||
TableCellRenderer TableColumn TableColumnModel TableColumnModelEvent TableColumnModelListener
|
|
||||||
TableHeaderUI TableModel TableModelEvent TableModelListener TableUI TableView TabSet TabStop TabularData
|
|
||||||
TabularDataSupport TabularType TagElement Target TargetDataLine TargetedNotification Templates
|
|
||||||
TemplatesHandler TextAction TextArea TextAttribute TextComponent TextEvent TextField TextHitInfo
|
|
||||||
TextInputCallback TextLayout TextListener TextMeasurer TextOutputCallback TextSyntax TextUI TexturePaint
|
|
||||||
Thread ThreadDeath ThreadFactory ThreadGroup ThreadInfo ThreadLocal ThreadMXBean ThreadPoolExecutor
|
|
||||||
Throwable Tie TileObserver Time TimeLimitExceededException TimeoutException Timer
|
|
||||||
TimerAlarmClockNotification TimerMBean TimerNotification TimerTask Timestamp TimeUnit TimeZone
|
|
||||||
TitledBorder ToolBarUI Toolkit ToolTipManager ToolTipUI TooManyListenersException Track
|
|
||||||
TransactionalWriter TransactionRequiredException TransactionRolledbackException Transferable
|
|
||||||
TransferHandler TransformAttribute Transformer TransformerConfigurationException TransformerException
|
|
||||||
TransformerFactory TransformerFactoryConfigurationError TransformerHandler Transmitter Transparency
|
|
||||||
TreeCellEditor TreeCellRenderer TreeExpansionEvent TreeExpansionListener TreeMap TreeModel
|
|
||||||
TreeModelEvent TreeModelListener TreeNode TreePath TreeSelectionEvent TreeSelectionListener
|
|
||||||
TreeSelectionModel TreeSet TreeUI TreeWillExpandListener TrustAnchor TrustManager TrustManagerFactory
|
|
||||||
TrustManagerFactorySpi Type TypeInfoProvider TypeNotPresentException Types TypeVariable UID UIDefaults
|
|
||||||
UIManager UIResource UndeclaredThrowableException UndoableEdit UndoableEditEvent UndoableEditListener
|
|
||||||
UndoableEditSupport UndoManager UnexpectedException UnicastRemoteObject UnknownError
|
|
||||||
UnknownFormatConversionException UnknownFormatFlagsException UnknownGroupException UnknownHostException
|
|
||||||
UnknownObjectException UnknownServiceException UnmappableCharacterException UnmarshalException
|
|
||||||
UnmodifiableClassException UnmodifiableSetException UnrecoverableEntryException
|
|
||||||
UnrecoverableKeyException Unreferenced UnresolvedAddressException UnresolvedPermission
|
|
||||||
UnsatisfiedLinkError UnsolicitedNotification UnsolicitedNotificationEvent
|
|
||||||
UnsolicitedNotificationListener UnsupportedAddressTypeException UnsupportedAudioFileException
|
|
||||||
UnsupportedCallbackException UnsupportedCharsetException UnsupportedClassVersionError
|
|
||||||
UnsupportedEncodingException UnsupportedFlavorException UnsupportedLookAndFeelException
|
|
||||||
UnsupportedOperationException URI URIException URIResolver URISyntax URISyntaxException URL
|
|
||||||
URLClassLoader URLConnection URLDecoder URLEncoder URLStreamHandler URLStreamHandlerFactory
|
|
||||||
UTFDataFormatException Util UtilDelegate Utilities UUID Validator ValidatorHandler ValueExp ValueHandler
|
|
||||||
ValueHandlerMultiFormat VariableHeightLayoutCache Vector VerifyError VetoableChangeListener
|
|
||||||
VetoableChangeListenerProxy VetoableChangeSupport View ViewFactory ViewportLayout ViewportUI
|
|
||||||
VirtualMachineError Visibility VMID VoiceStatus Void VolatileImage WeakHashMap WeakReference WebRowSet
|
|
||||||
WildcardType Window WindowAdapter WindowConstants WindowEvent WindowFocusListener WindowListener
|
|
||||||
WindowStateListener WrappedPlainView WritableByteChannel WritableRaster WritableRenderedImage
|
|
||||||
WriteAbortedException Writer X500Principal X500PrivateCredential X509Certificate X509CertSelector
|
|
||||||
X509CRL X509CRLEntry X509CRLSelector X509EncodedKeySpec X509ExtendedKeyManager X509Extension
|
|
||||||
X509KeyManager X509TrustManager XAConnection XADataSource XAException XAResource Xid XMLConstants
|
|
||||||
XMLDecoder XMLEncoder XMLFormatter XMLGregorianCalendar XMLParseException XmlReader XmlWriter XPath
|
|
||||||
XPathConstants XPathException XPathExpression XPathExpressionException XPathFactory
|
|
||||||
XPathFactoryConfigurationException XPathFunction XPathFunctionException XPathFunctionResolver
|
|
||||||
XPathVariableResolver ZipEntry ZipException ZipFile ZipInputStream ZipOutputStream ZoneView
|
|
||||||
]
|
|
||||||
#:nocov:
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
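The class list above is a plain word list; CodeRay scanners typically fold such lists into a WordList so that classifying an identifier is a single hash lookup (the same pattern appears in the IDENT_KIND constant of the JavaScript scanner below). A small sketch, assuming the coderay gem is installed; the short array is a stand-in for the full List above, and the :predefined_type kind is an assumption:

require 'coderay'
require 'coderay/helpers/word_list'

# Hypothetical, tiny stand-in for Java::BuiltinTypes::List above.
BUILTIN_TYPES = %w[ ArrayList HashMap String ]

# WordList maps words to token kinds and falls back to the default (:ident).
IDENT_KIND = CodeRay::WordList.new(:ident).
  add(%w[ class public static ], :keyword).
  add(BUILTIN_TYPES, :predefined_type)

p IDENT_KIND['ArrayList']   # => :predefined_type
p IDENT_KIND['myVariable']  # => :ident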
@ -1,213 +0,0 @@
module CodeRay
module Scanners
  
  # Scanner for JavaScript.
  #
  # Aliases: +ecmascript+, +ecma_script+, +javascript+
  class JavaScript < Scanner
    
    register_for :java_script
    file_extension 'js'
    
    # The actual JavaScript keywords.
    KEYWORDS = %w[
      break case catch continue default delete do else
      finally for function if in instanceof new
      return switch throw try typeof var void while with
    ] # :nodoc:
    PREDEFINED_CONSTANTS = %w[
      false null true undefined NaN Infinity
    ] # :nodoc:
    
    MAGIC_VARIABLES = %w[ this arguments ] # :nodoc: arguments was introduced in JavaScript 1.4
    
    KEYWORDS_EXPECTING_VALUE = WordList.new.add %w[
      case delete in instanceof new return throw typeof with
    ] # :nodoc:
    
    # Reserved for future use.
    RESERVED_WORDS = %w[
      abstract boolean byte char class debugger double enum export extends
      final float goto implements import int interface long native package
      private protected public short static super synchronized throws transient
      volatile
    ] # :nodoc:
    
    IDENT_KIND = WordList.new(:ident).
      add(RESERVED_WORDS, :reserved).
      add(PREDEFINED_CONSTANTS, :predefined_constant).
      add(MAGIC_VARIABLES, :local_variable).
      add(KEYWORDS, :keyword) # :nodoc:
    
    ESCAPE = / [bfnrtv\n\\'"] | x[a-fA-F0-9]{1,2} | [0-7]{1,3} /x # :nodoc:
    UNICODE_ESCAPE = / u[a-fA-F0-9]{4} | U[a-fA-F0-9]{8} /x # :nodoc:
    REGEXP_ESCAPE = / [bBdDsSwW] /x # :nodoc:
    STRING_CONTENT_PATTERN = {
      "'" => /[^\\']+/,
      '"' => /[^\\"]+/,
      '/' => /[^\\\/]+/,
    } # :nodoc:
    KEY_CHECK_PATTERN = {
      "'" => / (?> [^\\']* (?: \\. [^\\']* )* ) ' \s* : /mx,
      '"' => / (?> [^\\"]* (?: \\. [^\\"]* )* ) " \s* : /mx,
    } # :nodoc:
    
  protected
    
    def scan_tokens encoder, options
      
      state = :initial
      string_delimiter = nil
      value_expected = true
      key_expected = false
      function_expected = false
      
      until eos?
        
        case state
        
        when :initial
          
          if match = scan(/ \s+ | \\\n /x)
            value_expected = true if !value_expected && match.index(?\n)
            encoder.text_token match, :space
            
          elsif match = scan(%r! // [^\n\\]* (?: \\. [^\n\\]* )* | /\* (?: .*? \*/ | .* ) !mx)
            value_expected = true
            encoder.text_token match, :comment
            
          elsif check(/\.?\d/)
            key_expected = value_expected = false
            if match = scan(/0[xX][0-9A-Fa-f]+/)
              encoder.text_token match, :hex
            elsif match = scan(/(?>0[0-7]+)(?![89.eEfF])/)
              encoder.text_token match, :octal
            elsif match = scan(/\d+[fF]|\d*\.\d+(?:[eE][+-]?\d+)?[fF]?|\d+[eE][+-]?\d+[fF]?/)
              encoder.text_token match, :float
            elsif match = scan(/\d+/)
              encoder.text_token match, :integer
            end
            
          elsif value_expected && match = scan(/<([[:alpha:]]\w*) (?: [^\/>]*\/> | .*?<\/\1>)/xim)
            # TODO: scan over nested tags
            xml_scanner.tokenize match, :tokens => encoder
            value_expected = false
            next
            
          elsif match = scan(/ [-+*=<>?:;,!&^|(\[{~%]+ | \.(?!\d) /x)
            value_expected = true
            last_operator = match[-1]
            key_expected = (last_operator == ?{) || (last_operator == ?,)
            function_expected = false
            encoder.text_token match, :operator
            
          elsif match = scan(/ [)\]}]+ /x)
            function_expected = key_expected = value_expected = false
            encoder.text_token match, :operator
            
          elsif match = scan(/ [$a-zA-Z_][A-Za-z_0-9$]* /x)
            kind = IDENT_KIND[match]
            value_expected = (kind == :keyword) && KEYWORDS_EXPECTING_VALUE[match]
            # TODO: labels
            if kind == :ident
              if match.index(?$) # $ allowed inside an identifier
                kind = :predefined
              elsif function_expected
                kind = :function
              elsif check(/\s*[=:]\s*function\b/)
                kind = :function
              elsif key_expected && check(/\s*:/)
                kind = :key
              end
            end
            function_expected = (kind == :keyword) && (match == 'function')
            key_expected = false
            encoder.text_token match, kind
            
          elsif match = scan(/["']/)
            if key_expected && check(KEY_CHECK_PATTERN[match])
              state = :key
            else
              state = :string
            end
            encoder.begin_group state
            string_delimiter = match
            encoder.text_token match, :delimiter
            
          elsif value_expected && (match = scan(/\//))
            encoder.begin_group :regexp
            state = :regexp
            string_delimiter = '/'
            encoder.text_token match, :delimiter
            
          elsif match = scan(/ \/ /x)
            value_expected = true
            key_expected = false
            encoder.text_token match, :operator
            
          else
            encoder.text_token getch, :error
            
          end
          
        when :string, :regexp, :key
          if match = scan(STRING_CONTENT_PATTERN[string_delimiter])
            encoder.text_token match, :content
          elsif match = scan(/["'\/]/)
            encoder.text_token match, :delimiter
            if state == :regexp
              modifiers = scan(/[gim]+/)
              encoder.text_token modifiers, :modifier if modifiers && !modifiers.empty?
            end
            encoder.end_group state
            string_delimiter = nil
            key_expected = value_expected = false
            state = :initial
          elsif state != :regexp && (match = scan(/ \\ (?: #{ESCAPE} | #{UNICODE_ESCAPE} ) /mox))
            if string_delimiter == "'" && !(match == "\\\\" || match == "\\'")
              encoder.text_token match, :content
            else
              encoder.text_token match, :char
            end
          elsif state == :regexp && match = scan(/ \\ (?: #{ESCAPE} | #{REGEXP_ESCAPE} | #{UNICODE_ESCAPE} ) /mox)
            encoder.text_token match, :char
          elsif match = scan(/\\./m)
            encoder.text_token match, :content
          elsif match = scan(/ \\ | $ /x)
            encoder.end_group state
            encoder.text_token match, :error
            key_expected = value_expected = false
            state = :initial
          else
            raise_inspect "else case \" reached; %p not handled." % peek(1), encoder
          end
          
        else
          raise_inspect 'Unknown state', encoder
          
        end
        
      end
      
      if [:string, :regexp].include? state
        encoder.end_group state
      end
      
      encoder
    end
    
  protected
    
    def reset_instance
      super
      @xml_scanner.reset if defined? @xml_scanner
    end
    
    def xml_scanner
      @xml_scanner ||= CodeRay.scanner :xml, :tokens => @tokens, :keep_tokens => true, :keep_state => false
    end
    
  end
  
end
end
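Because the class registers itself as :java_script (file extension js), it is reached through CodeRay's normal entry point; a minimal usage sketch, assuming the coderay gem is available (the div encoder call follows CodeRay's documented examples). The value_expected flag above is what lets the scanner treat /ab+c/gi here as a regexp literal rather than a pair of divisions:

require 'coderay'

code = 'var re = /ab+c/gi; // a regexp literal, not division'
puts CodeRay.scan(code, :java_script).div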
@ -1,95 +0,0 @@
module CodeRay
module Scanners
  
  # Scanner for JSON (JavaScript Object Notation).
  class JSON < Scanner
    
    register_for :json
    file_extension 'json'
    
    KINDS_NOT_LOC = [
      :float, :char, :content, :delimiter,
      :error, :integer, :operator, :value,
    ] # :nodoc:
    
    ESCAPE = / [bfnrt\\"\/] /x # :nodoc:
    UNICODE_ESCAPE = / u[a-fA-F0-9]{4} /x # :nodoc:
    
  protected
    
    # See http://json.org/ for a definition of the JSON lexic/grammar.
    def scan_tokens encoder, options
      
      state = :initial
      stack = []
      key_expected = false
      
      until eos?
        
        case state
        
        when :initial
          if match = scan(/ \s+ /x)
            encoder.text_token match, :space
          elsif match = scan(/"/)
            state = key_expected ? :key : :string
            encoder.begin_group state
            encoder.text_token match, :delimiter
          elsif match = scan(/ [:,\[{\]}] /x)
            encoder.text_token match, :operator
            case match
            when ':' then key_expected = false
            when ',' then key_expected = true if stack.last == :object
            when '{' then stack << :object; key_expected = true
            when '[' then stack << :array
            when '}', ']' then stack.pop # no error recovery, but works for valid JSON
            end
          elsif match = scan(/ true | false | null /x)
            encoder.text_token match, :value
          elsif match = scan(/ -? (?: 0 | [1-9]\d* ) /x)
            if scan(/ \.\d+ (?:[eE][-+]?\d+)? | [eE][-+]? \d+ /x)
              match << matched
              encoder.text_token match, :float
            else
              encoder.text_token match, :integer
            end
          else
            encoder.text_token getch, :error
          end
          
        when :string, :key
          if match = scan(/[^\\"]+/)
            encoder.text_token match, :content
          elsif match = scan(/"/)
            encoder.text_token match, :delimiter
            encoder.end_group state
            state = :initial
          elsif match = scan(/ \\ (?: #{ESCAPE} | #{UNICODE_ESCAPE} ) /mox)
            encoder.text_token match, :char
          elsif match = scan(/\\./m)
            encoder.text_token match, :content
          elsif match = scan(/ \\ | $ /x)
            encoder.end_group state
            encoder.text_token match, :error
            state = :initial
          else
            raise_inspect "else case \" reached; %p not handled." % peek(1), encoder
          end
          
        else
          raise_inspect 'Unknown state: %p' % [state], encoder
          
        end
      end
      
      if [:string, :key].include? state
        encoder.end_group state
      end
      
      encoder
    end
    
  end
  
end
end
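The JSON scanner keeps a stack of :object/:array contexts so it can tell keys from ordinary strings, and its KINDS_NOT_LOC list feeds CodeRay's lines-of-code counting. A usage sketch, assuming the coderay gem and that the HTML page and lines-of-code encoders keep their registered names (:page, :lines_of_code):

require 'coderay'

json   = '{"name": "CodeRay", "loc": 42, "tags": ["ruby", null]}'
tokens = CodeRay.scan(json, :json)

puts tokens.page           # highlighted HTML; keys and plain strings land in different groups
puts tokens.lines_of_code  # LOC count, ignoring the kinds listed in KINDS_NOT_LOC above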
@ -1,509 +0,0 @@
module CodeRay
module Scanners
  
  load :html
  
  # Scanner for PHP.
  #
  # Original by Stefan Walk.
  class PHP < Scanner
    
    register_for :php
    file_extension 'php'
    encoding 'BINARY'
    
    KINDS_NOT_LOC = HTML::KINDS_NOT_LOC
    
  protected
    
    def setup
      @html_scanner = CodeRay.scanner :html, :tokens => @tokens, :keep_tokens => true, :keep_state => true
    end
    
    def reset_instance
      super
      @html_scanner.reset
    end
    
    module Words # :nodoc:
      
      # according to http://www.php.net/manual/en/reserved.keywords.php
      KEYWORDS = %w[
        abstract and array as break case catch class clone const continue declare default do else elseif
        enddeclare endfor endforeach endif endswitch endwhile extends final for foreach function global
        goto if implements interface instanceof namespace new or private protected public static switch
        throw try use var while xor
        cfunction old_function
      ]
      
      TYPES = %w[ int integer float double bool boolean string array object resource ]
      
      LANGUAGE_CONSTRUCTS = %w[
        die echo empty exit eval include include_once isset list
        require require_once return print unset
      ]
      
      CLASSES = %w[ Directory stdClass __PHP_Incomplete_Class exception php_user_filter Closure ]
      
# according to http://php.net/quickref.php on 2009-04-21;
|
|
||||||
# all functions with _ excluded (module functions) and selected additional functions
|
|
||||||
BUILTIN_FUNCTIONS = %w[
|
|
||||||
abs acos acosh addcslashes addslashes aggregate array arsort ascii2ebcdic asin asinh asort assert atan atan2
|
|
||||||
atanh basename bcadd bccomp bcdiv bcmod bcmul bcpow bcpowmod bcscale bcsqrt bcsub bin2hex bindec
|
|
||||||
bindtextdomain bzclose bzcompress bzdecompress bzerrno bzerror bzerrstr bzflush bzopen bzread bzwrite
|
|
||||||
calculhmac ceil chdir checkdate checkdnsrr chgrp chmod chop chown chr chroot clearstatcache closedir closelog
|
|
||||||
compact constant copy cos cosh count crc32 crypt current date dcgettext dcngettext deaggregate decbin dechex
|
|
||||||
decoct define defined deg2rad delete dgettext die dirname diskfreespace dl dngettext doubleval each
|
|
||||||
ebcdic2ascii echo empty end ereg eregi escapeshellarg escapeshellcmd eval exec exit exp explode expm1 extract
|
|
||||||
fclose feof fflush fgetc fgetcsv fgets fgetss file fileatime filectime filegroup fileinode filemtime fileowner
|
|
||||||
fileperms filepro filesize filetype floatval flock floor flush fmod fnmatch fopen fpassthru fprintf fputcsv
|
|
||||||
fputs fread frenchtojd fscanf fseek fsockopen fstat ftell ftok ftruncate fwrite getallheaders getcwd getdate
|
|
||||||
getenv gethostbyaddr gethostbyname gethostbynamel getimagesize getlastmod getmxrr getmygid getmyinode getmypid
|
|
||||||
getmyuid getopt getprotobyname getprotobynumber getrandmax getrusage getservbyname getservbyport gettext
|
|
||||||
gettimeofday gettype glob gmdate gmmktime gmstrftime gregoriantojd gzclose gzcompress gzdecode gzdeflate
|
|
||||||
gzencode gzeof gzfile gzgetc gzgets gzgetss gzinflate gzopen gzpassthru gzputs gzread gzrewind gzseek gztell
|
|
||||||
gzuncompress gzwrite hash header hebrev hebrevc hexdec htmlentities htmlspecialchars hypot iconv idate
|
|
||||||
implode include intval ip2long iptcembed iptcparse isset
|
|
||||||
jddayofweek jdmonthname jdtofrench jdtogregorian jdtojewish jdtojulian jdtounix jewishtojd join jpeg2wbmp
|
|
||||||
juliantojd key krsort ksort lcfirst lchgrp lchown levenshtein link linkinfo list localeconv localtime log
|
|
||||||
log10 log1p long2ip lstat ltrim mail main max md5 metaphone mhash microtime min mkdir mktime msql natcasesort
|
|
||||||
natsort next ngettext nl2br nthmac octdec opendir openlog
|
|
||||||
ord overload pack passthru pathinfo pclose pfsockopen phpcredits phpinfo phpversion pi png2wbmp popen pos pow
|
|
||||||
prev print printf putenv quotemeta rad2deg rand range rawurldecode rawurlencode readdir readfile readgzfile
|
|
||||||
readline readlink realpath recode rename require reset rewind rewinddir rmdir round rsort rtrim scandir
|
|
||||||
serialize setcookie setlocale setrawcookie settype sha1 shuffle signeurlpaiement sin sinh sizeof sleep snmpget
|
|
||||||
snmpgetnext snmprealwalk snmpset snmpwalk snmpwalkoid sort soundex split spliti sprintf sqrt srand sscanf stat
|
|
||||||
strcasecmp strchr strcmp strcoll strcspn strftime stripcslashes stripos stripslashes stristr strlen
|
|
||||||
strnatcasecmp strnatcmp strncasecmp strncmp strpbrk strpos strptime strrchr strrev strripos strrpos strspn
|
|
||||||
strstr strtok strtolower strtotime strtoupper strtr strval substr symlink syslog system tan tanh tempnam
|
|
||||||
textdomain time tmpfile touch trim uasort ucfirst ucwords uksort umask uniqid unixtojd unlink unpack
|
|
||||||
unserialize unset urldecode urlencode usleep usort vfprintf virtual vprintf vsprintf wordwrap
|
|
||||||
array_change_key_case array_chunk array_combine array_count_values array_diff array_diff_assoc
|
|
||||||
array_diff_key array_diff_uassoc array_diff_ukey array_fill array_fill_keys array_filter array_flip
|
|
||||||
array_intersect array_intersect_assoc array_intersect_key array_intersect_uassoc array_intersect_ukey
|
|
||||||
array_key_exists array_keys array_map array_merge array_merge_recursive array_multisort array_pad
|
|
||||||
array_pop array_product array_push array_rand array_reduce array_reverse array_search array_shift
|
|
||||||
array_slice array_splice array_sum array_udiff array_udiff_assoc array_udiff_uassoc array_uintersect
|
|
||||||
array_uintersect_assoc array_uintersect_uassoc array_unique array_unshift array_values array_walk
|
|
||||||
array_walk_recursive
|
|
||||||
assert_options base_convert base64_decode base64_encode
|
|
||||||
chunk_split class_exists class_implements class_parents
|
|
||||||
count_chars debug_backtrace debug_print_backtrace debug_zval_dump
|
|
||||||
error_get_last error_log error_reporting extension_loaded
|
|
||||||
file_exists file_get_contents file_put_contents load_file
|
|
||||||
func_get_arg func_get_args func_num_args function_exists
|
|
||||||
get_browser get_called_class get_cfg_var get_class get_class_methods get_class_vars
|
|
||||||
get_current_user get_declared_classes get_declared_interfaces get_defined_constants
|
|
||||||
get_defined_functions get_defined_vars get_extension_funcs get_headers get_html_translation_table
|
|
||||||
get_include_path get_included_files get_loaded_extensions get_magic_quotes_gpc get_magic_quotes_runtime
|
|
||||||
get_meta_tags get_object_vars get_parent_class get_required_files get_resource_type
|
|
||||||
gc_collect_cycles gc_disable gc_enable gc_enabled
|
|
||||||
halt_compiler headers_list headers_sent highlight_file highlight_string
|
|
||||||
html_entity_decode htmlspecialchars_decode
|
|
||||||
in_array include_once inclued_get_data
|
|
||||||
is_a is_array is_binary is_bool is_buffer is_callable is_dir is_double is_executable is_file is_finite
|
|
||||||
is_float is_infinite is_int is_integer is_link is_long is_nan is_null is_numeric is_object is_readable
|
|
||||||
is_real is_resource is_scalar is_soap_fault is_string is_subclass_of is_unicode is_uploaded_file
|
|
||||||
is_writable is_writeable
|
|
||||||
locale_get_default locale_set_default
|
|
||||||
number_format override_function parse_str parse_url
|
|
||||||
php_check_syntax php_ini_loaded_file php_ini_scanned_files php_logo_guid php_sapi_name
|
|
||||||
php_strip_whitespace php_uname
|
|
||||||
preg_filter preg_grep preg_last_error preg_match preg_match_all preg_quote preg_replace
|
|
||||||
preg_replace_callback preg_split print_r
|
|
||||||
require_once register_shutdown_function register_tick_function
|
|
||||||
set_error_handler set_exception_handler set_file_buffer set_include_path
|
|
||||||
set_magic_quotes_runtime set_time_limit shell_exec
|
|
||||||
str_getcsv str_ireplace str_pad str_repeat str_replace str_rot13 str_shuffle str_split str_word_count
|
|
||||||
strip_tags substr_compare substr_count substr_replace
|
|
||||||
time_nanosleep time_sleep_until
|
|
||||||
token_get_all token_name trigger_error
|
|
||||||
unregister_tick_function use_soap_error_handler user_error
|
|
||||||
utf8_decode utf8_encode var_dump var_export
|
|
||||||
version_compare
|
|
||||||
zend_logo_guid zend_thread_id zend_version
|
|
||||||
create_function call_user_func_array
|
|
||||||
posix_access posix_ctermid posix_get_last_error posix_getcwd posix_getegid
|
|
||||||
posix_geteuid posix_getgid posix_getgrgid posix_getgrnam posix_getgroups
|
|
||||||
posix_getlogin posix_getpgid posix_getpgrp posix_getpid posix_getppid
|
|
||||||
posix_getpwnam posix_getpwuid posix_getrlimit posix_getsid posix_getuid
|
|
||||||
posix_initgroups posix_isatty posix_kill posix_mkfifo posix_mknod
|
|
||||||
posix_setegid posix_seteuid posix_setgid posix_setpgid posix_setsid
|
|
||||||
posix_setuid posix_strerror posix_times posix_ttyname posix_uname
|
|
||||||
pcntl_alarm pcntl_exec pcntl_fork pcntl_getpriority pcntl_setpriority
|
|
||||||
pcntl_signal pcntl_signal_dispatch pcntl_sigprocmask pcntl_sigtimedwait
|
|
||||||
pcntl_sigwaitinfo pcntl_wait pcntl_waitpid pcntl_wexitstatus pcntl_wifexited
|
|
||||||
pcntl_wifsignaled pcntl_wifstopped pcntl_wstopsig pcntl_wtermsig
|
|
||||||
]
|
|
||||||
# TODO: more built-in PHP functions?
|
|
||||||
|
|
||||||
EXCEPTIONS = %w[
|
|
||||||
E_ERROR E_WARNING E_PARSE E_NOTICE E_CORE_ERROR E_CORE_WARNING E_COMPILE_ERROR E_COMPILE_WARNING
|
|
||||||
E_USER_ERROR E_USER_WARNING E_USER_NOTICE E_DEPRECATED E_USER_DEPRECATED E_ALL E_STRICT
|
|
||||||
]
|
|
||||||
|
|
||||||
CONSTANTS = %w[
|
|
||||||
null true false self parent
|
|
||||||
__LINE__ __DIR__ __FILE__
|
|
||||||
__CLASS__ __NAMESPACE__ __METHOD__ __FUNCTION__
|
|
||||||
PHP_VERSION PHP_MAJOR_VERSION PHP_MINOR_VERSION PHP_RELEASE_VERSION PHP_VERSION_ID PHP_EXTRA_VERSION PHP_ZTS
|
|
||||||
PHP_DEBUG PHP_MAXPATHLEN PHP_OS PHP_SAPI PHP_EOL PHP_INT_MAX PHP_INT_SIZE DEFAULT_INCLUDE_PATH
|
|
||||||
PEAR_INSTALL_DIR PEAR_EXTENSION_DIR PHP_EXTENSION_DIR PHP_PREFIX PHP_BINDIR PHP_LIBDIR PHP_DATADIR
|
|
||||||
PHP_SYSCONFDIR PHP_LOCALSTATEDIR PHP_CONFIG_FILE_PATH PHP_CONFIG_FILE_SCAN_DIR PHP_SHLIB_SUFFIX
|
|
||||||
PHP_OUTPUT_HANDLER_START PHP_OUTPUT_HANDLER_CONT PHP_OUTPUT_HANDLER_END
|
|
||||||
__COMPILER_HALT_OFFSET__
|
|
||||||
EXTR_OVERWRITE EXTR_SKIP EXTR_PREFIX_SAME EXTR_PREFIX_ALL EXTR_PREFIX_INVALID EXTR_PREFIX_IF_EXISTS
|
|
||||||
EXTR_IF_EXISTS SORT_ASC SORT_DESC SORT_REGULAR SORT_NUMERIC SORT_STRING CASE_LOWER CASE_UPPER COUNT_NORMAL
|
|
||||||
COUNT_RECURSIVE ASSERT_ACTIVE ASSERT_CALLBACK ASSERT_BAIL ASSERT_WARNING ASSERT_QUIET_EVAL CONNECTION_ABORTED
|
|
||||||
CONNECTION_NORMAL CONNECTION_TIMEOUT INI_USER INI_PERDIR INI_SYSTEM INI_ALL M_E M_LOG2E M_LOG10E M_LN2 M_LN10
|
|
||||||
M_PI M_PI_2 M_PI_4 M_1_PI M_2_PI M_2_SQRTPI M_SQRT2 M_SQRT1_2 CRYPT_SALT_LENGTH CRYPT_STD_DES CRYPT_EXT_DES
|
|
||||||
CRYPT_MD5 CRYPT_BLOWFISH DIRECTORY_SEPARATOR SEEK_SET SEEK_CUR SEEK_END LOCK_SH LOCK_EX LOCK_UN LOCK_NB
|
|
||||||
HTML_SPECIALCHARS HTML_ENTITIES ENT_COMPAT ENT_QUOTES ENT_NOQUOTES INFO_GENERAL INFO_CREDITS
|
|
||||||
INFO_CONFIGURATION INFO_MODULES INFO_ENVIRONMENT INFO_VARIABLES INFO_LICENSE INFO_ALL CREDITS_GROUP
|
|
||||||
CREDITS_GENERAL CREDITS_SAPI CREDITS_MODULES CREDITS_DOCS CREDITS_FULLPAGE CREDITS_QA CREDITS_ALL STR_PAD_LEFT
|
|
||||||
STR_PAD_RIGHT STR_PAD_BOTH PATHINFO_DIRNAME PATHINFO_BASENAME PATHINFO_EXTENSION PATH_SEPARATOR CHAR_MAX
|
|
||||||
LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_ALL LC_MESSAGES ABDAY_1 ABDAY_2 ABDAY_3 ABDAY_4 ABDAY_5
|
|
||||||
ABDAY_6 ABDAY_7 DAY_1 DAY_2 DAY_3 DAY_4 DAY_5 DAY_6 DAY_7 ABMON_1 ABMON_2 ABMON_3 ABMON_4 ABMON_5 ABMON_6
|
|
||||||
ABMON_7 ABMON_8 ABMON_9 ABMON_10 ABMON_11 ABMON_12 MON_1 MON_2 MON_3 MON_4 MON_5 MON_6 MON_7 MON_8 MON_9
|
|
||||||
MON_10 MON_11 MON_12 AM_STR PM_STR D_T_FMT D_FMT T_FMT T_FMT_AMPM ERA ERA_YEAR ERA_D_T_FMT ERA_D_FMT ERA_T_FMT
|
|
||||||
ALT_DIGITS INT_CURR_SYMBOL CURRENCY_SYMBOL CRNCYSTR MON_DECIMAL_POINT MON_THOUSANDS_SEP MON_GROUPING
|
|
||||||
POSITIVE_SIGN NEGATIVE_SIGN INT_FRAC_DIGITS FRAC_DIGITS P_CS_PRECEDES P_SEP_BY_SPACE N_CS_PRECEDES
|
|
||||||
N_SEP_BY_SPACE P_SIGN_POSN N_SIGN_POSN DECIMAL_POINT RADIXCHAR THOUSANDS_SEP THOUSEP GROUPING YESEXPR NOEXPR
|
|
||||||
YESSTR NOSTR CODESET LOG_EMERG LOG_ALERT LOG_CRIT LOG_ERR LOG_WARNING LOG_NOTICE LOG_INFO LOG_DEBUG LOG_KERN
|
|
||||||
LOG_USER LOG_MAIL LOG_DAEMON LOG_AUTH LOG_SYSLOG LOG_LPR LOG_NEWS LOG_UUCP LOG_CRON LOG_AUTHPRIV LOG_LOCAL0
|
|
||||||
LOG_LOCAL1 LOG_LOCAL2 LOG_LOCAL3 LOG_LOCAL4 LOG_LOCAL5 LOG_LOCAL6 LOG_LOCAL7 LOG_PID LOG_CONS LOG_ODELAY
|
|
||||||
LOG_NDELAY LOG_NOWAIT LOG_PERROR
|
|
||||||
]
|
|
||||||
|
|
||||||
PREDEFINED = %w[
|
|
||||||
$GLOBALS $_SERVER $_GET $_POST $_FILES $_REQUEST $_SESSION $_ENV
|
|
||||||
$_COOKIE $php_errormsg $HTTP_RAW_POST_DATA $http_response_header
|
|
||||||
$argc $argv
|
|
||||||
]
|
|
||||||
|
|
||||||
IDENT_KIND = WordList::CaseIgnoring.new(:ident).
|
|
||||||
add(KEYWORDS, :keyword).
|
|
||||||
add(TYPES, :predefined_type).
|
|
||||||
add(LANGUAGE_CONSTRUCTS, :keyword).
|
|
||||||
add(BUILTIN_FUNCTIONS, :predefined).
|
|
||||||
add(CLASSES, :predefined_constant).
|
|
||||||
add(EXCEPTIONS, :exception).
|
|
||||||
add(CONSTANTS, :predefined_constant)
|
|
||||||
|
|
||||||
VARIABLE_KIND = WordList.new(:local_variable).
|
|
||||||
add(PREDEFINED, :predefined)
|
|
||||||
end
|
|
||||||
|
|
||||||
module RE # :nodoc:
|
|
||||||
|
|
||||||
PHP_START = /
|
|
||||||
<script\s+[^>]*?language\s*=\s*"php"[^>]*?> |
|
|
||||||
<script\s+[^>]*?language\s*=\s*'php'[^>]*?> |
|
|
||||||
<\?php\d? |
|
|
||||||
<\?(?!xml)
|
|
||||||
/xi
|
|
||||||
|
|
||||||
PHP_END = %r!
|
|
||||||
</script> |
|
|
||||||
\?>
|
|
||||||
!xi
|
|
||||||
|
|
||||||
HTML_INDICATOR = /<!DOCTYPE html|<(?:html|body|div|p)[> ]/i
|
|
||||||
|
|
||||||
IDENTIFIER = /[a-z_\x7f-\xFF][a-z0-9_\x7f-\xFF]*/i
|
|
||||||
VARIABLE = /\$#{IDENTIFIER}/
|
|
||||||
|
|
||||||
OPERATOR = /
|
|
||||||
\.(?!\d)=? | # dot that is not decimal point, string concatenation
|
|
||||||
&& | \|\| | # logic
|
|
||||||
:: | -> | => | # scope, member, dictionary
|
|
||||||
\\(?!\n) | # namespace
|
|
||||||
\+\+ | -- | # increment, decrement
|
|
||||||
[,;?:()\[\]{}] | # simple delimiters
|
|
||||||
[-+*\/%&|^]=? | # ordinary math, binary logic, assignment shortcuts
|
|
||||||
[~$] | # whatever
|
|
||||||
=& | # reference assignment
|
|
||||||
[=!]=?=? | <> | # comparison and assignment
|
|
||||||
<<=? | >>=? | [<>]=? # comparison and shift
|
|
||||||
/x
|
|
||||||
|
|
||||||
end
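
The RE module above only collects raw regexps, so it can be probed in isolation once the PHP scanner is loaded. A minimal sketch, assuming an installed coderay gem whose scanner class is CodeRay::Scanners::PHP (the class name is not confirmed by this commit):

require 'coderay'
CodeRay.scan('', :php)   # force the PHP scanner file to load

re = CodeRay::Scanners::PHP::RE
p '<?php echo 1;' =~ re::PHP_START   # => 0, PHP opening tag recognized
p 'plain text'    =~ re::PHP_START   # => nil
p '?>'            =~ re::PHP_END     # => 0, closing tag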
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
|
|
||||||
if check(RE::PHP_START) || # starts with <?
|
|
||||||
(match?(/\s*<\S/) && check(/.{1,1000}#{RE::PHP_START}/om)) || # starts with tag and contains <?
|
|
||||||
check(/.{0,1000}#{RE::HTML_INDICATOR}/om) ||
|
|
||||||
check(/.{1,100}#{RE::PHP_START}/om) # PHP start after max 100 chars
|
|
||||||
# is HTML with embedded PHP, so start with HTML
|
|
||||||
states = [:initial]
|
|
||||||
else
|
|
||||||
# is just PHP, so start with PHP surrounded by HTML
|
|
||||||
states = [:initial, :php]
|
|
||||||
end
|
|
||||||
|
|
||||||
label_expected = true
|
|
||||||
case_expected = false
|
|
||||||
|
|
||||||
heredoc_delimiter = nil
|
|
||||||
delimiter = nil
|
|
||||||
modifier = nil
|
|
||||||
|
|
||||||
until eos?
|
|
||||||
|
|
||||||
case states.last
|
|
||||||
|
|
||||||
when :initial # HTML
|
|
||||||
if match = scan(RE::PHP_START)
|
|
||||||
encoder.text_token match, :inline_delimiter
|
|
||||||
label_expected = true
|
|
||||||
states << :php
|
|
||||||
else
|
|
||||||
match = scan_until(/(?=#{RE::PHP_START})/o) || scan_rest
|
|
||||||
@html_scanner.tokenize match unless match.empty?
|
|
||||||
end
|
|
||||||
|
|
||||||
when :php
|
|
||||||
if match = scan(/\s+/)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
|
|
||||||
elsif match = scan(%r! (?m: \/\* (?: .*? \*\/ | .* ) ) | (?://|\#) .*? (?=#{RE::PHP_END}|$) !xo)
|
|
||||||
encoder.text_token match, :comment
|
|
||||||
|
|
||||||
elsif match = scan(RE::IDENTIFIER)
|
|
||||||
kind = Words::IDENT_KIND[match]
|
|
||||||
if kind == :ident && label_expected && check(/:(?!:)/)
|
|
||||||
kind = :label
|
|
||||||
label_expected = true
|
|
||||||
else
|
|
||||||
label_expected = false
|
|
||||||
if kind == :ident && match =~ /^[A-Z]/
|
|
||||||
kind = :constant
|
|
||||||
elsif kind == :keyword
|
|
||||||
case match
|
|
||||||
when 'class'
|
|
||||||
states << :class_expected
|
|
||||||
when 'function'
|
|
||||||
states << :function_expected
|
|
||||||
when 'case', 'default'
|
|
||||||
case_expected = true
|
|
||||||
end
|
|
||||||
elsif match == 'b' && check(/['"]/) # binary string literal
|
|
||||||
modifier = match
|
|
||||||
next
|
|
||||||
end
|
|
||||||
end
|
|
||||||
encoder.text_token match, kind
|
|
||||||
|
|
||||||
elsif match = scan(/(?:\d+\.\d*|\d*\.\d+)(?:e[-+]?\d+)?|\d+e[-+]?\d+/i)
|
|
||||||
label_expected = false
|
|
||||||
encoder.text_token match, :float
|
|
||||||
|
|
||||||
elsif match = scan(/0x[0-9a-fA-F]+/)
|
|
||||||
label_expected = false
|
|
||||||
encoder.text_token match, :hex
|
|
||||||
|
|
||||||
elsif match = scan(/\d+/)
|
|
||||||
label_expected = false
|
|
||||||
encoder.text_token match, :integer
|
|
||||||
|
|
||||||
elsif match = scan(/['"`]/)
|
|
||||||
encoder.begin_group :string
|
|
||||||
if modifier
|
|
||||||
encoder.text_token modifier, :modifier
|
|
||||||
modifier = nil
|
|
||||||
end
|
|
||||||
delimiter = match
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
states.push match == "'" ? :sqstring : :dqstring
|
|
||||||
|
|
||||||
elsif match = scan(RE::VARIABLE)
|
|
||||||
label_expected = false
|
|
||||||
encoder.text_token match, Words::VARIABLE_KIND[match]
|
|
||||||
|
|
||||||
elsif match = scan(/\{/)
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
label_expected = true
|
|
||||||
states.push :php
|
|
||||||
|
|
||||||
elsif match = scan(/\}/)
|
|
||||||
if states.size == 1
|
|
||||||
encoder.text_token match, :error
|
|
||||||
else
|
|
||||||
states.pop
|
|
||||||
if states.last.is_a?(::Array)
|
|
||||||
delimiter = states.last[1]
|
|
||||||
states[-1] = states.last[0]
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
encoder.end_group :inline
|
|
||||||
else
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
label_expected = true
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif match = scan(/@/)
|
|
||||||
label_expected = false
|
|
||||||
encoder.text_token match, :exception
|
|
||||||
|
|
||||||
elsif match = scan(RE::PHP_END)
|
|
||||||
encoder.text_token match, :inline_delimiter
|
|
||||||
states = [:initial]
|
|
||||||
|
|
||||||
elsif match = scan(/<<<(?:(#{RE::IDENTIFIER})|"(#{RE::IDENTIFIER})"|'(#{RE::IDENTIFIER})')/o)
|
|
||||||
encoder.begin_group :string
|
|
||||||
# warn 'heredoc in heredoc?' if heredoc_delimiter
|
|
||||||
heredoc_delimiter = Regexp.escape(self[1] || self[2] || self[3])
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
states.push self[3] ? :sqstring : :dqstring
|
|
||||||
heredoc_delimiter = /#{heredoc_delimiter}(?=;?$)/
|
|
||||||
|
|
||||||
elsif match = scan(/#{RE::OPERATOR}/o)
|
|
||||||
label_expected = match == ';'
|
|
||||||
if case_expected
|
|
||||||
label_expected = true if match == ':'
|
|
||||||
case_expected = false
|
|
||||||
end
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
|
|
||||||
else
|
|
||||||
encoder.text_token getch, :error
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
when :sqstring
|
|
||||||
if match = scan(heredoc_delimiter ? /[^\\\n]+/ : /[^'\\]+/)
|
|
||||||
encoder.text_token match, :content
|
|
||||||
elsif !heredoc_delimiter && match = scan(/'/)
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
encoder.end_group :string
|
|
||||||
delimiter = nil
|
|
||||||
label_expected = false
|
|
||||||
states.pop
|
|
||||||
elsif heredoc_delimiter && match = scan(/\n/)
|
|
||||||
if scan heredoc_delimiter
|
|
||||||
encoder.text_token "\n", :content
|
|
||||||
encoder.text_token matched, :delimiter
|
|
||||||
encoder.end_group :string
|
|
||||||
heredoc_delimiter = nil
|
|
||||||
label_expected = false
|
|
||||||
states.pop
|
|
||||||
else
|
|
||||||
encoder.text_token match, :content
|
|
||||||
end
|
|
||||||
elsif match = scan(heredoc_delimiter ? /\\\\/ : /\\[\\'\n]/)
|
|
||||||
encoder.text_token match, :char
|
|
||||||
elsif match = scan(/\\./m)
|
|
||||||
encoder.text_token match, :content
|
|
||||||
elsif match = scan(/\\/)
|
|
||||||
encoder.text_token match, :error
|
|
||||||
else
|
|
||||||
states.pop
|
|
||||||
end
|
|
||||||
|
|
||||||
when :dqstring
|
|
||||||
if match = scan(heredoc_delimiter ? /[^${\\\n]+/ : (delimiter == '"' ? /[^"${\\]+/ : /[^`${\\]+/))
|
|
||||||
encoder.text_token match, :content
|
|
||||||
elsif !heredoc_delimiter && match = scan(delimiter == '"' ? /"/ : /`/)
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
encoder.end_group :string
|
|
||||||
delimiter = nil
|
|
||||||
label_expected = false
|
|
||||||
states.pop
|
|
||||||
elsif heredoc_delimiter && match = scan(/\n/)
|
|
||||||
if scan heredoc_delimiter
|
|
||||||
encoder.text_token "\n", :content
|
|
||||||
encoder.text_token matched, :delimiter
|
|
||||||
encoder.end_group :string
|
|
||||||
heredoc_delimiter = nil
|
|
||||||
label_expected = false
|
|
||||||
states.pop
|
|
||||||
else
|
|
||||||
encoder.text_token match, :content
|
|
||||||
end
|
|
||||||
elsif match = scan(/\\(?:x[0-9A-Fa-f]{1,2}|[0-7]{1,3})/)
|
|
||||||
encoder.text_token match, :char
|
|
||||||
elsif match = scan(heredoc_delimiter ? /\\[nrtvf\\$]/ : (delimiter == '"' ? /\\[nrtvf\\$"]/ : /\\[nrtvf\\$`]/))
|
|
||||||
encoder.text_token match, :char
|
|
||||||
elsif match = scan(/\\./m)
|
|
||||||
encoder.text_token match, :content
|
|
||||||
elsif match = scan(/\\/)
|
|
||||||
encoder.text_token match, :error
|
|
||||||
elsif match = scan(/#{RE::VARIABLE}/o)
|
|
||||||
if check(/\[#{RE::IDENTIFIER}\]/o)
|
|
||||||
encoder.begin_group :inline
|
|
||||||
encoder.text_token match, :local_variable
|
|
||||||
encoder.text_token scan(/\[/), :operator
|
|
||||||
encoder.text_token scan(/#{RE::IDENTIFIER}/o), :ident
|
|
||||||
encoder.text_token scan(/\]/), :operator
|
|
||||||
encoder.end_group :inline
|
|
||||||
elsif check(/\[/)
|
|
||||||
match << scan(/\[['"]?#{RE::IDENTIFIER}?['"]?\]?/o)
|
|
||||||
encoder.text_token match, :error
|
|
||||||
elsif check(/->#{RE::IDENTIFIER}/o)
|
|
||||||
encoder.begin_group :inline
|
|
||||||
encoder.text_token match, :local_variable
|
|
||||||
encoder.text_token scan(/->/), :operator
|
|
||||||
encoder.text_token scan(/#{RE::IDENTIFIER}/o), :ident
|
|
||||||
encoder.end_group :inline
|
|
||||||
elsif check(/->/)
|
|
||||||
match << scan(/->/)
|
|
||||||
encoder.text_token match, :error
|
|
||||||
else
|
|
||||||
encoder.text_token match, :local_variable
|
|
||||||
end
|
|
||||||
elsif match = scan(/\{/)
|
|
||||||
if check(/\$/)
|
|
||||||
encoder.begin_group :inline
|
|
||||||
states[-1] = [states.last, delimiter]
|
|
||||||
delimiter = nil
|
|
||||||
states.push :php
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
else
|
|
||||||
encoder.text_token match, :content
|
|
||||||
end
|
|
||||||
elsif match = scan(/\$\{#{RE::IDENTIFIER}\}/o)
|
|
||||||
encoder.text_token match, :local_variable
|
|
||||||
elsif match = scan(/\$/)
|
|
||||||
encoder.text_token match, :content
|
|
||||||
else
|
|
||||||
states.pop
|
|
||||||
end
|
|
||||||
|
|
||||||
when :class_expected
|
|
||||||
if match = scan(/\s+/)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
elsif match = scan(/#{RE::IDENTIFIER}/o)
|
|
||||||
encoder.text_token match, :class
|
|
||||||
states.pop
|
|
||||||
else
|
|
||||||
states.pop
|
|
||||||
end
|
|
||||||
|
|
||||||
when :function_expected
|
|
||||||
if match = scan(/\s+/)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
elsif match = scan(/&/)
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
elsif match = scan(/#{RE::IDENTIFIER}/o)
|
|
||||||
encoder.text_token match, :function
|
|
||||||
states.pop
|
|
||||||
else
|
|
||||||
states.pop
|
|
||||||
end
|
|
||||||
|
|
||||||
else
|
|
||||||
raise_inspect 'Unknown state!', encoder, states
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
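
That closes the removed PHP scanner. It registers itself under the :php id, so callers go through CodeRay's generic entry points rather than this class; with the vendored copy gone, the same id is expected to resolve against an installed coderay gem. A hedged usage sketch:

require 'coderay'

# Scan a mixed HTML/PHP snippet with the scanner registered for :php,
# then render the token stream as an HTML fragment.
tokens = CodeRay.scan('<p><?php echo strtoupper("hi"); ?></p>', :php)
puts tokens.html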
@@ -1,287 +0,0 @@
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
# Scanner for Python. Supports Python 3.
|
|
||||||
#
|
|
||||||
# Based on pygments' PythonLexer, see
|
|
||||||
# http://dev.pocoo.org/projects/pygments/browser/pygments/lexers/agile.py.
|
|
||||||
class Python < Scanner
|
|
||||||
|
|
||||||
register_for :python
|
|
||||||
file_extension 'py'
|
|
||||||
|
|
||||||
KEYWORDS = [
|
|
||||||
'and', 'as', 'assert', 'break', 'class', 'continue', 'def',
|
|
||||||
'del', 'elif', 'else', 'except', 'finally', 'for',
|
|
||||||
'from', 'global', 'if', 'import', 'in', 'is', 'lambda', 'not',
|
|
||||||
'or', 'pass', 'raise', 'return', 'try', 'while', 'with', 'yield',
|
|
||||||
'nonlocal', # new in Python 3
|
|
||||||
] # :nodoc:
|
|
||||||
|
|
||||||
OLD_KEYWORDS = [
|
|
||||||
'exec', 'print', # gone in Python 3
|
|
||||||
] # :nodoc:
|
|
||||||
|
|
||||||
PREDEFINED_METHODS_AND_TYPES = %w[
|
|
||||||
__import__ abs all any apply basestring bin bool buffer
|
|
||||||
bytearray bytes callable chr classmethod cmp coerce compile
|
|
||||||
complex delattr dict dir divmod enumerate eval execfile exit
|
|
||||||
file filter float frozenset getattr globals hasattr hash hex id
|
|
||||||
input int intern isinstance issubclass iter len list locals
|
|
||||||
long map max min next object oct open ord pow property range
|
|
||||||
raw_input reduce reload repr reversed round set setattr slice
|
|
||||||
sorted staticmethod str sum super tuple type unichr unicode
|
|
||||||
vars xrange zip
|
|
||||||
] # :nodoc:
|
|
||||||
|
|
||||||
PREDEFINED_EXCEPTIONS = %w[
|
|
||||||
ArithmeticError AssertionError AttributeError
|
|
||||||
BaseException DeprecationWarning EOFError EnvironmentError
|
|
||||||
Exception FloatingPointError FutureWarning GeneratorExit IOError
|
|
||||||
ImportError ImportWarning IndentationError IndexError KeyError
|
|
||||||
KeyboardInterrupt LookupError MemoryError NameError
|
|
||||||
NotImplemented NotImplementedError OSError OverflowError
|
|
||||||
OverflowWarning PendingDeprecationWarning ReferenceError
|
|
||||||
RuntimeError RuntimeWarning StandardError StopIteration
|
|
||||||
SyntaxError SyntaxWarning SystemError SystemExit TabError
|
|
||||||
TypeError UnboundLocalError UnicodeDecodeError
|
|
||||||
UnicodeEncodeError UnicodeError UnicodeTranslateError
|
|
||||||
UnicodeWarning UserWarning ValueError Warning ZeroDivisionError
|
|
||||||
] # :nodoc:
|
|
||||||
|
|
||||||
PREDEFINED_VARIABLES_AND_CONSTANTS = [
|
|
||||||
'False', 'True', 'None', # "keywords" since Python 3
|
|
||||||
'self', 'Ellipsis', 'NotImplemented',
|
|
||||||
] # :nodoc:
|
|
||||||
|
|
||||||
IDENT_KIND = WordList.new(:ident).
|
|
||||||
add(KEYWORDS, :keyword).
|
|
||||||
add(OLD_KEYWORDS, :old_keyword).
|
|
||||||
add(PREDEFINED_METHODS_AND_TYPES, :predefined).
|
|
||||||
add(PREDEFINED_VARIABLES_AND_CONSTANTS, :predefined_constant).
|
|
||||||
add(PREDEFINED_EXCEPTIONS, :exception) # :nodoc:
|
|
||||||
|
|
||||||
NAME = / [^\W\d] \w* /x # :nodoc:
|
|
||||||
ESCAPE = / [abfnrtv\n\\'"] | x[a-fA-F0-9]{1,2} | [0-7]{1,3} /x # :nodoc:
|
|
||||||
UNICODE_ESCAPE = / u[a-fA-F0-9]{4} | U[a-fA-F0-9]{8} | N\{[-\w ]+\} /x # :nodoc:
|
|
||||||
|
|
||||||
OPERATOR = /
|
|
||||||
\.\.\. | # ellipsis
|
|
||||||
\.(?!\d) | # dot but not decimal point
|
|
||||||
[,;:()\[\]{}] | # simple delimiters
|
|
||||||
\/\/=? | \*\*=? | # special math
|
|
||||||
[-+*\/%&|^]=? | # ordinary math and binary logic
|
|
||||||
[~`] | # binary complement and inspection
|
|
||||||
<<=? | >>=? | [<>=]=? | != # comparison and assignment
|
|
||||||
/x # :nodoc:
|
|
||||||
|
|
||||||
STRING_DELIMITER_REGEXP = Hash.new { |h, delimiter|
|
|
||||||
h[delimiter] = Regexp.union delimiter # :nodoc:
|
|
||||||
}
|
|
||||||
|
|
||||||
STRING_CONTENT_REGEXP = Hash.new { |h, delimiter|
|
|
||||||
h[delimiter] = / [^\\\n]+? (?= \\ | $ | #{Regexp.escape(delimiter)} ) /x # :nodoc:
|
|
||||||
}
|
|
||||||
|
|
||||||
DEF_NEW_STATE = WordList.new(:initial).
|
|
||||||
add(%w(def), :def_expected).
|
|
||||||
add(%w(import from), :include_expected).
|
|
||||||
add(%w(class), :class_expected) # :nodoc:
|
|
||||||
|
|
||||||
DESCRIPTOR = /
|
|
||||||
#{NAME}
|
|
||||||
(?: \. #{NAME} )*
|
|
||||||
| \*
|
|
||||||
/x # :nodoc:
|
|
||||||
|
|
||||||
DOCSTRING_COMING = /
|
|
||||||
[ \t]* u?r? ("""|''')
|
|
||||||
/x # :nodoc:
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
|
|
||||||
state = :initial
|
|
||||||
string_delimiter = nil
|
|
||||||
string_raw = false
|
|
||||||
string_type = nil
|
|
||||||
docstring_coming = match?(/#{DOCSTRING_COMING}/o)
|
|
||||||
last_token_dot = false
|
|
||||||
unicode = string.respond_to?(:encoding) && string.encoding.name == 'UTF-8'
|
|
||||||
from_import_state = []
|
|
||||||
|
|
||||||
until eos?
|
|
||||||
|
|
||||||
if state == :string
|
|
||||||
if match = scan(STRING_DELIMITER_REGEXP[string_delimiter])
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
encoder.end_group string_type
|
|
||||||
string_type = nil
|
|
||||||
state = :initial
|
|
||||||
next
|
|
||||||
elsif string_delimiter.size == 3 && match = scan(/\n/)
|
|
||||||
encoder.text_token match, :content
|
|
||||||
elsif match = scan(STRING_CONTENT_REGEXP[string_delimiter])
|
|
||||||
encoder.text_token match, :content
|
|
||||||
elsif !string_raw && match = scan(/ \\ #{ESCAPE} /ox)
|
|
||||||
encoder.text_token match, :char
|
|
||||||
elsif match = scan(/ \\ #{UNICODE_ESCAPE} /ox)
|
|
||||||
encoder.text_token match, :char
|
|
||||||
elsif match = scan(/ \\ . /x)
|
|
||||||
encoder.text_token match, :content
|
|
||||||
elsif match = scan(/ \\ | $ /x)
|
|
||||||
encoder.end_group string_type
|
|
||||||
string_type = nil
|
|
||||||
encoder.text_token match, :error
|
|
||||||
state = :initial
|
|
||||||
else
|
|
||||||
raise_inspect "else case \" reached; %p not handled." % peek(1), encoder, state
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif match = scan(/ [ \t]+ | \\?\n /x)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
if match == "\n"
|
|
||||||
state = :initial if state == :include_expected
|
|
||||||
docstring_coming = true if match?(/#{DOCSTRING_COMING}/o)
|
|
||||||
end
|
|
||||||
next
|
|
||||||
|
|
||||||
elsif match = scan(/ \# [^\n]* /mx)
|
|
||||||
encoder.text_token match, :comment
|
|
||||||
next
|
|
||||||
|
|
||||||
elsif state == :initial
|
|
||||||
|
|
||||||
if match = scan(/#{OPERATOR}/o)
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
|
|
||||||
elsif match = scan(/(u?r?|b)?("""|"|'''|')/i)
|
|
||||||
string_delimiter = self[2]
|
|
||||||
string_type = docstring_coming ? :docstring : :string
|
|
||||||
docstring_coming = false if docstring_coming
|
|
||||||
encoder.begin_group string_type
|
|
||||||
string_raw = false
|
|
||||||
modifiers = self[1]
|
|
||||||
unless modifiers.empty?
|
|
||||||
string_raw = !!modifiers.index(?r)
|
|
||||||
encoder.text_token modifiers, :modifier
|
|
||||||
match = string_delimiter
|
|
||||||
end
|
|
||||||
state = :string
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
|
|
||||||
# TODO: backticks
|
|
||||||
|
|
||||||
elsif match = scan(unicode ? /#{NAME}/uo : /#{NAME}/o)
|
|
||||||
kind = IDENT_KIND[match]
|
|
||||||
# TODO: keyword arguments
|
|
||||||
kind = :ident if last_token_dot
|
|
||||||
if kind == :old_keyword
|
|
||||||
kind = check(/\(/) ? :ident : :keyword
|
|
||||||
elsif kind == :predefined && check(/ *=/)
|
|
||||||
kind = :ident
|
|
||||||
elsif kind == :keyword
|
|
||||||
state = DEF_NEW_STATE[match]
|
|
||||||
from_import_state << match.to_sym if state == :include_expected
|
|
||||||
end
|
|
||||||
encoder.text_token match, kind
|
|
||||||
|
|
||||||
elsif match = scan(/@[a-zA-Z0-9_.]+[lL]?/)
|
|
||||||
encoder.text_token match, :decorator
|
|
||||||
|
|
||||||
elsif match = scan(/0[xX][0-9A-Fa-f]+[lL]?/)
|
|
||||||
encoder.text_token match, :hex
|
|
||||||
|
|
||||||
elsif match = scan(/0[bB][01]+[lL]?/)
|
|
||||||
encoder.text_token match, :binary
|
|
||||||
|
|
||||||
elsif match = scan(/(?:\d*\.\d+|\d+\.\d*)(?:[eE][+-]?\d+)?|\d+[eE][+-]?\d+/)
|
|
||||||
if scan(/[jJ]/)
|
|
||||||
match << matched
|
|
||||||
encoder.text_token match, :imaginary
|
|
||||||
else
|
|
||||||
encoder.text_token match, :float
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif match = scan(/0[oO][0-7]+|0[0-7]+(?![89.eE])[lL]?/)
|
|
||||||
encoder.text_token match, :octal
|
|
||||||
|
|
||||||
elsif match = scan(/\d+([lL])?/)
|
|
||||||
if self[1] == nil && scan(/[jJ]/)
|
|
||||||
match << matched
|
|
||||||
encoder.text_token match, :imaginary
|
|
||||||
else
|
|
||||||
encoder.text_token match, :integer
|
|
||||||
end
|
|
||||||
|
|
||||||
else
|
|
||||||
encoder.text_token getch, :error
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif state == :def_expected
|
|
||||||
state = :initial
|
|
||||||
if match = scan(unicode ? /#{NAME}/uo : /#{NAME}/o)
|
|
||||||
encoder.text_token match, :method
|
|
||||||
else
|
|
||||||
next
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif state == :class_expected
|
|
||||||
state = :initial
|
|
||||||
if match = scan(unicode ? /#{NAME}/uo : /#{NAME}/o)
|
|
||||||
encoder.text_token match, :class
|
|
||||||
else
|
|
||||||
next
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif state == :include_expected
|
|
||||||
if match = scan(unicode ? /#{DESCRIPTOR}/uo : /#{DESCRIPTOR}/o)
|
|
||||||
if match == 'as'
|
|
||||||
encoder.text_token match, :keyword
|
|
||||||
from_import_state << :as
|
|
||||||
elsif from_import_state.first == :from && match == 'import'
|
|
||||||
encoder.text_token match, :keyword
|
|
||||||
from_import_state << :import
|
|
||||||
elsif from_import_state.last == :as
|
|
||||||
# encoder.text_token match, match[0,1][unicode ? /[[:upper:]]/u : /[[:upper:]]/] ? :class : :method
|
|
||||||
encoder.text_token match, :ident
|
|
||||||
from_import_state.pop
|
|
||||||
elsif IDENT_KIND[match] == :keyword
|
|
||||||
unscan
|
|
||||||
match = nil
|
|
||||||
state = :initial
|
|
||||||
next
|
|
||||||
else
|
|
||||||
encoder.text_token match, :include
|
|
||||||
end
|
|
||||||
elsif match = scan(/,/)
|
|
||||||
from_import_state.pop if from_import_state.last == :as
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
else
|
|
||||||
from_import_state = []
|
|
||||||
state = :initial
|
|
||||||
next
|
|
||||||
end
|
|
||||||
|
|
||||||
else
|
|
||||||
raise_inspect 'Unknown state', encoder, state
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
last_token_dot = match == '.'
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
if state == :string
|
|
||||||
encoder.end_group string_type
|
|
||||||
end
|
|
||||||
|
|
||||||
encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
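
The Python scanner follows the same plugin pattern: register_for :python wires it into the generic API, and file_extension 'py' lets the language be guessed from a file name. A sketch under the same installed-gem assumption (setup.py is just a placeholder path):

require 'coderay'

# Highlight a Python snippet; :python selects the scanner defined above.
puts CodeRay.scan("def greet(name):\n    return 'hi ' + name\n", :python).div

# Guess the language from the registered file extension.
puts CodeRay.scan_file('setup.py').html   # assumes a setup.py exists locally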
@@ -1,66 +0,0 @@
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
# = Debug Scanner
|
|
||||||
#
|
|
||||||
# Parses the output of the Encoders::Debug encoder.
|
|
||||||
class Raydebug < Scanner
|
|
||||||
|
|
||||||
register_for :raydebug
|
|
||||||
file_extension 'raydebug'
|
|
||||||
title 'CodeRay Token Dump'
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
|
|
||||||
opened_tokens = []
|
|
||||||
|
|
||||||
until eos?
|
|
||||||
|
|
||||||
if match = scan(/\s+/)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
|
|
||||||
elsif match = scan(/ (\w+) \( ( [^\)\\]* ( \\. [^\)\\]* )* ) /x)
|
|
||||||
kind = self[1]
|
|
||||||
encoder.text_token kind, :class
|
|
||||||
encoder.text_token '(', :operator
|
|
||||||
match = self[2]
|
|
||||||
encoder.text_token match, kind.to_sym
|
|
||||||
encoder.text_token match, :operator if match = scan(/\)/)
|
|
||||||
|
|
||||||
elsif match = scan(/ (\w+) ([<\[]) /x)
|
|
||||||
kind = self[1]
|
|
||||||
case self[2]
|
|
||||||
when '<'
|
|
||||||
encoder.text_token kind, :class
|
|
||||||
when '['
|
|
||||||
encoder.text_token kind, :class
|
|
||||||
else
|
|
||||||
raise 'CodeRay bug: This case should not be reached.'
|
|
||||||
end
|
|
||||||
kind = kind.to_sym
|
|
||||||
opened_tokens << kind
|
|
||||||
encoder.begin_group kind
|
|
||||||
encoder.text_token self[2], :operator
|
|
||||||
|
|
||||||
elsif !opened_tokens.empty? && match = scan(/ [>\]] /x)
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
encoder.end_group opened_tokens.pop
|
|
||||||
|
|
||||||
else
|
|
||||||
encoder.text_token getch, :space
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
encoder.end_group opened_tokens.pop until opened_tokens.empty?
|
|
||||||
|
|
||||||
encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
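
Raydebug is a helper scanner: it re-highlights the textual token dumps produced by the Debug encoder, which is mainly useful when inspecting scanner output. A round-trip sketch, assuming the usual encoder shortcuts on the token stream:

require 'coderay'

dump = CodeRay.scan('1 + 2', :ruby).debug   # textual token dump, e.g. integer(1)...
puts CodeRay.scan(dump, :raydebug).html     # colorize the dump itself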
@@ -1,461 +0,0 @@
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
# This scanner is really complex, since Ruby _is_ a complex language!
|
|
||||||
#
|
|
||||||
# It tries to highlight 100% of all common code,
|
|
||||||
# and 90% of unusual code.
|
|
||||||
#
|
|
||||||
# It is optimized for HTML highlighting, and is not very useful for
|
|
||||||
# parsing or pretty printing.
|
|
||||||
class Ruby < Scanner
|
|
||||||
|
|
||||||
register_for :ruby
|
|
||||||
file_extension 'rb'
|
|
||||||
|
|
||||||
autoload :Patterns, 'coderay/scanners/ruby/patterns'
|
|
||||||
autoload :StringState, 'coderay/scanners/ruby/string_state'
|
|
||||||
|
|
||||||
def interpreted_string_state
|
|
||||||
StringState.new :string, true, '"'
|
|
||||||
end
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def setup
|
|
||||||
@state = :initial
|
|
||||||
end
|
|
||||||
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
state, heredocs = options[:state] || @state
|
|
||||||
heredocs = heredocs.dup if heredocs.is_a?(Array)
|
|
||||||
|
|
||||||
if state && state.instance_of?(StringState)
|
|
||||||
encoder.begin_group state.type
|
|
||||||
end
|
|
||||||
|
|
||||||
last_state = nil
|
|
||||||
|
|
||||||
method_call_expected = false
|
|
||||||
value_expected = true
|
|
||||||
|
|
||||||
inline_block_stack = nil
|
|
||||||
inline_block_curly_depth = 0
|
|
||||||
|
|
||||||
if heredocs
|
|
||||||
state = heredocs.shift
|
|
||||||
encoder.begin_group state.type
|
|
||||||
heredocs = nil if heredocs.empty?
|
|
||||||
end
|
|
||||||
|
|
||||||
# def_object_stack = nil
|
|
||||||
# def_object_paren_depth = 0
|
|
||||||
|
|
||||||
patterns = Patterns # avoid constant lookup
|
|
||||||
|
|
||||||
unicode = string.respond_to?(:encoding) && string.encoding.name == 'UTF-8'
|
|
||||||
|
|
||||||
until eos?
|
|
||||||
|
|
||||||
if state.instance_of? ::Symbol
|
|
||||||
|
|
||||||
if match = scan(/[ \t\f\v]+/)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
|
|
||||||
elsif match = scan(/\n/)
|
|
||||||
if heredocs
|
|
||||||
unscan # heredoc scanning needs \n at start
|
|
||||||
state = heredocs.shift
|
|
||||||
encoder.begin_group state.type
|
|
||||||
heredocs = nil if heredocs.empty?
|
|
||||||
else
|
|
||||||
state = :initial if state == :undef_comma_expected
|
|
||||||
encoder.text_token match, :space
|
|
||||||
value_expected = true
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif match = scan(bol? ? / \#(!)?.* | #{patterns::RUBYDOC_OR_DATA} /ox : /\#.*/)
|
|
||||||
encoder.text_token match, self[1] ? :doctype : :comment
|
|
||||||
|
|
||||||
elsif match = scan(/\\\n/)
|
|
||||||
if heredocs
|
|
||||||
unscan # heredoc scanning needs \n at start
|
|
||||||
encoder.text_token scan(/\\/), :space
|
|
||||||
state = heredocs.shift
|
|
||||||
encoder.begin_group state.type
|
|
||||||
heredocs = nil if heredocs.empty?
|
|
||||||
else
|
|
||||||
encoder.text_token match, :space
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif state == :initial
|
|
||||||
|
|
||||||
# IDENTS #
|
|
||||||
if !method_call_expected &&
|
|
||||||
match = scan(unicode ? /#{patterns::METHOD_NAME}/uo :
|
|
||||||
/#{patterns::METHOD_NAME}/o)
|
|
||||||
value_expected = false
|
|
||||||
kind = patterns::IDENT_KIND[match]
|
|
||||||
if kind == :ident
|
|
||||||
if match[/\A[A-Z]/] && !(match[/[!?]$/] || match?(/\(/))
|
|
||||||
kind = :constant
|
|
||||||
end
|
|
||||||
elsif kind == :keyword
|
|
||||||
state = patterns::KEYWORD_NEW_STATE[match]
|
|
||||||
value_expected = true if patterns::KEYWORDS_EXPECTING_VALUE[match]
|
|
||||||
end
|
|
||||||
value_expected = true if !value_expected && check(/#{patterns::VALUE_FOLLOWS}/o)
|
|
||||||
encoder.text_token match, kind
|
|
||||||
|
|
||||||
elsif method_call_expected &&
|
|
||||||
match = scan(unicode ? /#{patterns::METHOD_AFTER_DOT}/uo :
|
|
||||||
/#{patterns::METHOD_AFTER_DOT}/o)
|
|
||||||
if method_call_expected == '::' && match[/\A[A-Z]/] && !match?(/\(/)
|
|
||||||
encoder.text_token match, :constant
|
|
||||||
else
|
|
||||||
encoder.text_token match, :ident
|
|
||||||
end
|
|
||||||
method_call_expected = false
|
|
||||||
value_expected = check(/#{patterns::VALUE_FOLLOWS}/o)
|
|
||||||
|
|
||||||
# OPERATORS #
|
|
||||||
elsif !method_call_expected && match = scan(/ (\.(?!\.)|::) | (?: \.\.\.? | ==?=? | [,\(\[\{] )() | [\)\]\}] /x)
|
|
||||||
method_call_expected = self[1]
|
|
||||||
value_expected = !method_call_expected && self[2]
|
|
||||||
if inline_block_stack
|
|
||||||
case match
|
|
||||||
when '{'
|
|
||||||
inline_block_curly_depth += 1
|
|
||||||
when '}'
|
|
||||||
inline_block_curly_depth -= 1
|
|
||||||
if inline_block_curly_depth == 0 # closing brace of inline block reached
|
|
||||||
state, inline_block_curly_depth, heredocs = inline_block_stack.pop
|
|
||||||
inline_block_stack = nil if inline_block_stack.empty?
|
|
||||||
heredocs = nil if heredocs && heredocs.empty?
|
|
||||||
encoder.text_token match, :inline_delimiter
|
|
||||||
encoder.end_group :inline
|
|
||||||
next
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
|
|
||||||
elsif match = scan(unicode ? /#{patterns::SYMBOL}/uo :
|
|
||||||
/#{patterns::SYMBOL}/o)
|
|
||||||
case delim = match[1]
|
|
||||||
when ?', ?"
|
|
||||||
encoder.begin_group :symbol
|
|
||||||
encoder.text_token ':', :symbol
|
|
||||||
match = delim.chr
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
state = self.class::StringState.new :symbol, delim == ?", match
|
|
||||||
else
|
|
||||||
encoder.text_token match, :symbol
|
|
||||||
value_expected = false
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif match = scan(/ ' (?:(?>[^'\\]*) ')? | " (?:(?>[^"\\\#]*) ")? /mx)
|
|
||||||
encoder.begin_group :string
|
|
||||||
if match.size == 1
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
state = self.class::StringState.new :string, match == '"', match # important for streaming
|
|
||||||
else
|
|
||||||
encoder.text_token match[0,1], :delimiter
|
|
||||||
encoder.text_token match[1..-2], :content if match.size > 2
|
|
||||||
encoder.text_token match[-1,1], :delimiter
|
|
||||||
encoder.end_group :string
|
|
||||||
value_expected = false
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif match = scan(unicode ? /#{patterns::INSTANCE_VARIABLE}/uo :
|
|
||||||
/#{patterns::INSTANCE_VARIABLE}/o)
|
|
||||||
value_expected = false
|
|
||||||
encoder.text_token match, :instance_variable
|
|
||||||
|
|
||||||
elsif value_expected && match = scan(/\//)
|
|
||||||
encoder.begin_group :regexp
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
state = self.class::StringState.new :regexp, true, '/'
|
|
||||||
|
|
||||||
elsif match = scan(value_expected ? /[-+]?#{patterns::NUMERIC}/o : /#{patterns::NUMERIC}/o)
|
|
||||||
if method_call_expected
|
|
||||||
encoder.text_token match, :error
|
|
||||||
method_call_expected = false
|
|
||||||
else
|
|
||||||
encoder.text_token match, self[1] ? :float : :integer # TODO: send :hex/:octal/:binary
|
|
||||||
end
|
|
||||||
value_expected = false
|
|
||||||
|
|
||||||
elsif match = scan(/ [-+!~^\/]=? | [:;] | [*|&]{1,2}=? | >>? /x)
|
|
||||||
value_expected = true
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
|
|
||||||
elsif value_expected && match = scan(/#{patterns::HEREDOC_OPEN}/o)
|
|
||||||
quote = self[3]
|
|
||||||
delim = self[quote ? 4 : 2]
|
|
||||||
kind = patterns::QUOTE_TO_TYPE[quote]
|
|
||||||
encoder.begin_group kind
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
encoder.end_group kind
|
|
||||||
heredocs ||= [] # create heredocs if empty
|
|
||||||
heredocs << self.class::StringState.new(kind, quote != "'", delim,
|
|
||||||
self[1] == '-' ? :indented : :linestart)
|
|
||||||
value_expected = false
|
|
||||||
|
|
||||||
elsif value_expected && match = scan(/#{patterns::FANCY_STRING_START}/o)
|
|
||||||
kind = patterns::FANCY_STRING_KIND[self[1]]
|
|
||||||
encoder.begin_group kind
|
|
||||||
state = self.class::StringState.new kind, patterns::FANCY_STRING_INTERPRETED[self[1]], self[2]
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
|
|
||||||
elsif value_expected && match = scan(/#{patterns::CHARACTER}/o)
|
|
||||||
value_expected = false
|
|
||||||
encoder.text_token match, :integer
|
|
||||||
|
|
||||||
elsif match = scan(/ %=? | <(?:<|=>?)? | \? /x)
|
|
||||||
value_expected = true
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
|
|
||||||
elsif match = scan(/`/)
|
|
||||||
encoder.begin_group :shell
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
state = self.class::StringState.new :shell, true, match
|
|
||||||
|
|
||||||
elsif match = scan(unicode ? /#{patterns::GLOBAL_VARIABLE}/uo :
|
|
||||||
/#{patterns::GLOBAL_VARIABLE}/o)
|
|
||||||
encoder.text_token match, :global_variable
|
|
||||||
value_expected = false
|
|
||||||
|
|
||||||
elsif match = scan(unicode ? /#{patterns::CLASS_VARIABLE}/uo :
|
|
||||||
/#{patterns::CLASS_VARIABLE}/o)
|
|
||||||
encoder.text_token match, :class_variable
|
|
||||||
value_expected = false
|
|
||||||
|
|
||||||
elsif match = scan(/\\\z/)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
|
|
||||||
else
|
|
||||||
if method_call_expected
|
|
||||||
method_call_expected = false
|
|
||||||
next
|
|
||||||
end
|
|
||||||
unless unicode
|
|
||||||
# check for unicode
|
|
||||||
$DEBUG_BEFORE, $DEBUG = $DEBUG, false
|
|
||||||
begin
|
|
||||||
if check(/./mu).size > 1
|
|
||||||
# seems like we should try again with unicode
|
|
||||||
unicode = true
|
|
||||||
end
|
|
||||||
rescue
|
|
||||||
# bad unicode char; use getch
|
|
||||||
ensure
|
|
||||||
$DEBUG = $DEBUG_BEFORE
|
|
||||||
end
|
|
||||||
next if unicode
|
|
||||||
end
|
|
||||||
|
|
||||||
encoder.text_token getch, :error
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
if last_state
|
|
||||||
state = last_state
|
|
||||||
last_state = nil
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif state == :def_expected
|
|
||||||
if match = scan(unicode ? /(?>#{patterns::METHOD_NAME_EX})(?!\.|::)/uo :
|
|
||||||
/(?>#{patterns::METHOD_NAME_EX})(?!\.|::)/o)
|
|
||||||
encoder.text_token match, :method
|
|
||||||
state = :initial
|
|
||||||
else
|
|
||||||
last_state = :dot_expected
|
|
||||||
state = :initial
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif state == :dot_expected
|
|
||||||
if match = scan(/\.|::/)
|
|
||||||
# invalid definition
|
|
||||||
state = :def_expected
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
else
|
|
||||||
state = :initial
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif state == :module_expected
|
|
||||||
if match = scan(/<</)
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
else
|
|
||||||
state = :initial
|
|
||||||
if match = scan(unicode ? / (?:#{patterns::IDENT}::)* #{patterns::IDENT} /oux :
|
|
||||||
/ (?:#{patterns::IDENT}::)* #{patterns::IDENT} /ox)
|
|
||||||
encoder.text_token match, :class
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif state == :undef_expected
|
|
||||||
state = :undef_comma_expected
|
|
||||||
if match = scan(unicode ? /(?>#{patterns::METHOD_NAME_EX})(?!\.|::)/uo :
|
|
||||||
/(?>#{patterns::METHOD_NAME_EX})(?!\.|::)/o)
|
|
||||||
encoder.text_token match, :method
|
|
||||||
elsif match = scan(/#{patterns::SYMBOL}/o)
|
|
||||||
case delim = match[1]
|
|
||||||
when ?', ?"
|
|
||||||
encoder.begin_group :symbol
|
|
||||||
encoder.text_token ':', :symbol
|
|
||||||
match = delim.chr
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
state = self.class::StringState.new :symbol, delim == ?", match
|
|
||||||
state.next_state = :undef_comma_expected
|
|
||||||
else
|
|
||||||
encoder.text_token match, :symbol
|
|
||||||
end
|
|
||||||
else
|
|
||||||
state = :initial
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif state == :undef_comma_expected
|
|
||||||
if match = scan(/,/)
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
state = :undef_expected
|
|
||||||
else
|
|
||||||
state = :initial
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif state == :alias_expected
|
|
||||||
match = scan(unicode ? /(#{patterns::METHOD_NAME_OR_SYMBOL})([ \t]+)(#{patterns::METHOD_NAME_OR_SYMBOL})/uo :
|
|
||||||
/(#{patterns::METHOD_NAME_OR_SYMBOL})([ \t]+)(#{patterns::METHOD_NAME_OR_SYMBOL})/o)
|
|
||||||
|
|
||||||
if match
|
|
||||||
encoder.text_token self[1], (self[1][0] == ?: ? :symbol : :method)
|
|
||||||
encoder.text_token self[2], :space
|
|
||||||
encoder.text_token self[3], (self[3][0] == ?: ? :symbol : :method)
|
|
||||||
end
|
|
||||||
state = :initial
|
|
||||||
|
|
||||||
else
|
|
||||||
#:nocov:
|
|
||||||
raise_inspect 'Unknown state: %p' % [state], encoder
|
|
||||||
#:nocov:
|
|
||||||
end
|
|
||||||
|
|
||||||
else # StringState
|
|
||||||
|
|
||||||
match = scan_until(state.pattern) || scan_rest
|
|
||||||
unless match.empty?
|
|
||||||
encoder.text_token match, :content
|
|
||||||
break if eos?
|
|
||||||
end
|
|
||||||
|
|
||||||
if state.heredoc && self[1] # end of heredoc
|
|
||||||
match = getch
|
|
||||||
match << scan_until(/$/) unless eos?
|
|
||||||
encoder.text_token match, :delimiter unless match.empty?
|
|
||||||
encoder.end_group state.type
|
|
||||||
state = state.next_state
|
|
||||||
next
|
|
||||||
end
|
|
||||||
|
|
||||||
case match = getch
|
|
||||||
|
|
||||||
when state.delim
|
|
||||||
if state.paren_depth
|
|
||||||
state.paren_depth -= 1
|
|
||||||
if state.paren_depth > 0
|
|
||||||
encoder.text_token match, :content
|
|
||||||
next
|
|
||||||
end
|
|
||||||
end
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
if state.type == :regexp && !eos?
|
|
||||||
match = scan(/#{patterns::REGEXP_MODIFIERS}/o)
|
|
||||||
encoder.text_token match, :modifier unless match.empty?
|
|
||||||
end
|
|
||||||
encoder.end_group state.type
|
|
||||||
value_expected = false
|
|
||||||
state = state.next_state
|
|
||||||
|
|
||||||
when '\\'
|
|
||||||
if state.interpreted
|
|
||||||
if esc = scan(/#{patterns::ESCAPE}/o)
|
|
||||||
encoder.text_token match + esc, :char
|
|
||||||
else
|
|
||||||
encoder.text_token match, :error
|
|
||||||
end
|
|
||||||
else
|
|
||||||
case esc = getch
|
|
||||||
when nil
|
|
||||||
encoder.text_token match, :content
|
|
||||||
when state.delim, '\\'
|
|
||||||
encoder.text_token match + esc, :char
|
|
||||||
else
|
|
||||||
encoder.text_token match + esc, :content
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
when '#'
|
|
||||||
case peek(1)
|
|
||||||
when '{'
|
|
||||||
inline_block_stack ||= []
|
|
||||||
inline_block_stack << [state, inline_block_curly_depth, heredocs]
|
|
||||||
value_expected = true
|
|
||||||
state = :initial
|
|
||||||
inline_block_curly_depth = 1
|
|
||||||
encoder.begin_group :inline
|
|
||||||
encoder.text_token match + getch, :inline_delimiter
|
|
||||||
when '$', '@'
|
|
||||||
encoder.text_token match, :escape
|
|
||||||
last_state = state
|
|
||||||
state = :initial
|
|
||||||
else
|
|
||||||
#:nocov:
|
|
||||||
raise_inspect 'else-case # reached; #%p not handled' % [peek(1)], encoder
|
|
||||||
#:nocov:
|
|
||||||
end
|
|
||||||
|
|
||||||
when state.opening_paren
|
|
||||||
state.paren_depth += 1
|
|
||||||
encoder.text_token match, :content
|
|
||||||
|
|
||||||
else
|
|
||||||
#:nocov:
|
|
||||||
raise_inspect 'else-case " reached; %p not handled, state = %p' % [match, state], encoder
|
|
||||||
#:nocov:
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
# cleaning up
|
|
||||||
if state.is_a? StringState
|
|
||||||
encoder.end_group state.type
|
|
||||||
end
|
|
||||||
|
|
||||||
if options[:keep_state]
|
|
||||||
if state.is_a?(StringState) && state.heredoc
|
|
||||||
(heredocs ||= []).unshift state
|
|
||||||
state = :initial
|
|
||||||
elsif heredocs && heredocs.empty?
|
|
||||||
heredocs = nil
|
|
||||||
end
|
|
||||||
@state = state, heredocs
|
|
||||||
end
|
|
||||||
|
|
||||||
if inline_block_stack
|
|
||||||
until inline_block_stack.empty?
|
|
||||||
state, = *inline_block_stack.pop
|
|
||||||
encoder.end_group :inline
|
|
||||||
encoder.end_group state.type
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
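
The Ruby scanner is the largest of the removed files; heredocs, string interpolation and the :keep_state option above are what make it stateful. A plain usage sketch against an installed gem:

require 'coderay'

code = <<-'RUBY'
  greeting = <<-TEXT
    Hello, #{name}!
  TEXT
RUBY

# Heredocs, interpolation and symbols all exercise the states defined above.
puts CodeRay.scan(code, :ruby).page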
@@ -1,175 +0,0 @@
# encoding: utf-8
|
|
||||||
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
module Ruby::Patterns # :nodoc: all
|
|
||||||
|
|
||||||
KEYWORDS = %w[
|
|
||||||
and def end in or unless begin
|
|
||||||
defined? ensure module redo super until
|
|
||||||
BEGIN break do next rescue then
|
|
||||||
when END case else for retry
|
|
||||||
while alias class elsif if not return
|
|
||||||
undef yield
|
|
||||||
]
|
|
||||||
|
|
||||||
# See http://murfy.de/ruby-constants.
|
|
||||||
PREDEFINED_CONSTANTS = %w[
|
|
||||||
nil true false self
|
|
||||||
DATA ARGV ARGF ENV
|
|
||||||
FALSE TRUE NIL
|
|
||||||
STDERR STDIN STDOUT
|
|
||||||
TOPLEVEL_BINDING
|
|
||||||
RUBY_COPYRIGHT RUBY_DESCRIPTION RUBY_ENGINE RUBY_PATCHLEVEL
|
|
||||||
RUBY_PLATFORM RUBY_RELEASE_DATE RUBY_REVISION RUBY_VERSION
|
|
||||||
__FILE__ __LINE__ __ENCODING__
|
|
||||||
]
|
|
||||||
|
|
||||||
IDENT_KIND = WordList.new(:ident).
|
|
||||||
add(KEYWORDS, :keyword).
|
|
||||||
add(PREDEFINED_CONSTANTS, :predefined_constant)
|
|
||||||
|
|
||||||
KEYWORD_NEW_STATE = WordList.new(:initial).
|
|
||||||
add(%w[ def ], :def_expected).
|
|
||||||
add(%w[ undef ], :undef_expected).
|
|
||||||
add(%w[ alias ], :alias_expected).
|
|
||||||
add(%w[ class module ], :module_expected)
|
|
||||||
|
|
||||||
IDENT = 'ä'[/[[:alpha:]]/] == 'ä' ? /[[:alpha:]_][[:alnum:]_]*/ : /[^\W\d]\w*/
|
|
||||||
|
|
||||||
METHOD_NAME = / #{IDENT} [?!]? /ox
|
|
||||||
METHOD_NAME_OPERATOR = /
|
|
||||||
\*\*? # multiplication and power
|
|
||||||
| [-+~]@? # plus, minus, tilde with and without at sign
|
|
||||||
| [\/%&|^`] # division, modulo or format strings, and, or, xor, system
|
|
||||||
| \[\]=? # array getter and setter
|
|
||||||
| << | >> # append or shift left, shift right
|
|
||||||
| <=?>? | >=? # comparison, rocket operator
|
|
||||||
| ===? | =~ # simple equality, case equality, match
|
|
||||||
| ![~=@]? # negation with and without at sign, not-equal and not-match
|
|
||||||
/ox
|
|
||||||
METHOD_SUFFIX = / (?: [?!] | = (?![~>]|=(?!>)) ) /x
|
|
||||||
METHOD_NAME_EX = / #{IDENT} #{METHOD_SUFFIX}? | #{METHOD_NAME_OPERATOR} /ox
|
|
||||||
METHOD_AFTER_DOT = / #{IDENT} [?!]? | #{METHOD_NAME_OPERATOR} /ox
|
|
||||||
INSTANCE_VARIABLE = / @ #{IDENT} /ox
|
|
||||||
CLASS_VARIABLE = / @@ #{IDENT} /ox
|
|
||||||
OBJECT_VARIABLE = / @@? #{IDENT} /ox
|
|
||||||
GLOBAL_VARIABLE = / \$ (?: #{IDENT} | [1-9]\d* | 0\w* | [~&+`'=\/,;_.<>!@$?*":\\] | -[a-zA-Z_0-9] ) /ox
|
|
||||||
PREFIX_VARIABLE = / #{GLOBAL_VARIABLE} | #{OBJECT_VARIABLE} /ox
|
|
||||||
VARIABLE = / @?@? #{IDENT} | #{GLOBAL_VARIABLE} /ox
|
|
||||||
|
|
||||||
QUOTE_TO_TYPE = {
|
|
||||||
'`' => :shell,
|
|
||||||
'/' => :regexp,
|
|
||||||
}
|
|
||||||
QUOTE_TO_TYPE.default = :string
|
|
||||||
|
|
||||||
REGEXP_MODIFIERS = /[mousenix]*/
|
|
||||||
|
|
||||||
DECIMAL = /\d+(?:_\d+)*/
|
|
||||||
OCTAL = /0_?[0-7]+(?:_[0-7]+)*/
|
|
||||||
HEXADECIMAL = /0x[0-9A-Fa-f]+(?:_[0-9A-Fa-f]+)*/
|
|
||||||
BINARY = /0b[01]+(?:_[01]+)*/
|
|
||||||
|
|
||||||
EXPONENT = / [eE] [+-]? #{DECIMAL} /ox
|
|
||||||
FLOAT_SUFFIX = / #{EXPONENT} | \. #{DECIMAL} #{EXPONENT}? /ox
|
|
||||||
FLOAT_OR_INT = / #{DECIMAL} (?: #{FLOAT_SUFFIX} () )? /ox
|
|
||||||
NUMERIC = / (?: (?=0) (?: #{OCTAL} | #{HEXADECIMAL} | #{BINARY} ) | #{FLOAT_OR_INT} ) /ox
|
|
||||||
|
|
||||||
SYMBOL = /
|
|
||||||
:
|
|
||||||
(?:
|
|
||||||
#{METHOD_NAME_EX}
|
|
||||||
| #{PREFIX_VARIABLE}
|
|
||||||
| ['"]
|
|
||||||
)
|
|
||||||
/ox
|
|
||||||
METHOD_NAME_OR_SYMBOL = / #{METHOD_NAME_EX} | #{SYMBOL} /ox
|
|
||||||
|
|
||||||
SIMPLE_ESCAPE = /
|
|
||||||
[abefnrstv]
|
|
||||||
| [0-7]{1,3}
|
|
||||||
| x[0-9A-Fa-f]{1,2}
|
|
||||||
| .
|
|
||||||
/mx
|
|
||||||
|
|
||||||
CONTROL_META_ESCAPE = /
|
|
||||||
(?: M-|C-|c )
|
|
||||||
(?: \\ (?: M-|C-|c ) )*
|
|
||||||
(?: [^\\] | \\ #{SIMPLE_ESCAPE} )?
|
|
||||||
/mox
|
|
||||||
|
|
||||||
ESCAPE = /
|
|
||||||
#{CONTROL_META_ESCAPE} | #{SIMPLE_ESCAPE}
|
|
||||||
/mox
|
|
||||||
|
|
||||||
CHARACTER = /
|
|
||||||
\?
|
|
||||||
(?:
|
|
||||||
[^\s\\]
|
|
||||||
| \\ #{ESCAPE}
|
|
||||||
)
|
|
||||||
/mox
|
|
||||||
|
|
||||||
# NOTE: This is not completely correct, but
|
|
||||||
# nobody needs heredoc delimiters ending with \n.
|
|
||||||
HEREDOC_OPEN = /
|
|
||||||
<< (-)?              # $1 = "-" flag for indented heredocs
|
|
||||||
(?:
|
|
||||||
( [A-Za-z_0-9]+ ) # $2 = delim
|
|
||||||
|
|
|
||||||
( ["'`\/] ) # $3 = quote, type
|
|
||||||
( [^\n]*? ) \3 # $4 = delim
|
|
||||||
)
|
|
||||||
/mx
|
|
||||||
|
|
||||||
RUBYDOC = /
|
|
||||||
=begin (?!\S)
|
|
||||||
.*?
|
|
||||||
(?: \Z | ^=end (?!\S) [^\n]* )
|
|
||||||
/mx
|
|
||||||
|
|
||||||
DATA = /
|
|
||||||
__END__$
|
|
||||||
.*?
|
|
||||||
(?: \Z | (?=^\#CODE) )
|
|
||||||
/mx
|
|
||||||
|
|
||||||
RUBYDOC_OR_DATA = / #{RUBYDOC} | #{DATA} /xo
|
|
||||||
|
|
||||||
# Checks for a valid value to follow. This enables
|
|
||||||
# value_expected in method calls without parentheses.
|
|
||||||
VALUE_FOLLOWS = /
|
|
||||||
(?>[ \t\f\v]+)
|
|
||||||
(?:
|
|
||||||
[%\/][^\s=]
|
|
||||||
| <<-?\S
|
|
||||||
| [-+] \d
|
|
||||||
| #{CHARACTER}
|
|
||||||
)
|
|
||||||
/ox
|
|
||||||
KEYWORDS_EXPECTING_VALUE = WordList.new.add(%w[
|
|
||||||
and end in or unless begin
|
|
||||||
defined? ensure redo super until
|
|
||||||
break do next rescue then
|
|
||||||
when case else for retry
|
|
||||||
while elsif if not return
|
|
||||||
yield
|
|
||||||
])
|
|
||||||
|
|
||||||
FANCY_STRING_START = / % ( [QqrsWwx] | (?![a-zA-Z0-9]) ) ([^a-zA-Z0-9]) /x
|
|
||||||
FANCY_STRING_KIND = Hash.new(:string).merge({
|
|
||||||
'r' => :regexp,
|
|
||||||
's' => :symbol,
|
|
||||||
'x' => :shell,
|
|
||||||
})
|
|
||||||
FANCY_STRING_INTERPRETED = Hash.new(true).merge({
|
|
||||||
'q' => false,
|
|
||||||
's' => false,
|
|
||||||
'w' => false,
|
|
||||||
})
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
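
The constants above are ordinary Ruby regexps and can be exercised on their own; the scanner autoloads this file on first use, so a throwaway scan forces it in. Sketch, same installed-gem assumption:

require 'coderay'
CodeRay.scan('', :ruby)   # forces coderay/scanners/ruby/patterns to load

patterns = CodeRay::Scanners::Ruby::Patterns
p patterns::NUMERIC.match('0x1F')[0]   # => "0x1F"
p patterns::SYMBOL.match(':foo?')[0]   # => ":foo?"
p patterns::HEREDOC_OPEN =~ '<<-DOC'   # => 0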
@@ -1,71 +0,0 @@
# encoding: utf-8
|
|
||||||
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
class Ruby
|
|
||||||
|
|
||||||
class StringState < Struct.new :type, :interpreted, :delim, :heredoc,
|
|
||||||
:opening_paren, :paren_depth, :pattern, :next_state # :nodoc: all
|
|
||||||
|
|
||||||
CLOSING_PAREN = Hash[ *%w[
|
|
||||||
( )
|
|
||||||
[ ]
|
|
||||||
< >
|
|
||||||
{ }
|
|
||||||
] ].each { |k,v| k.freeze; v.freeze } # debug, if I try to change it with <<
|
|
||||||
|
|
||||||
STRING_PATTERN = Hash.new do |h, k|
|
|
||||||
delim, interpreted = *k
|
|
||||||
# delim = delim.dup # workaround for old Ruby
|
|
||||||
delim_pattern = Regexp.escape(delim)
|
|
||||||
if closing_paren = CLOSING_PAREN[delim]
|
|
||||||
delim_pattern << Regexp.escape(closing_paren)
|
|
||||||
end
|
|
||||||
delim_pattern << '\\\\' unless delim == '\\'
|
|
||||||
|
|
||||||
# special_escapes =
|
|
||||||
# case interpreted
|
|
||||||
# when :regexp_symbols
|
|
||||||
# '| [|?*+(){}\[\].^$]'
|
|
||||||
# end
|
|
||||||
|
|
||||||
h[k] =
|
|
||||||
if interpreted && delim != '#'
|
|
||||||
/ (?= [#{delim_pattern}] | \# [{$@] ) /mx
|
|
||||||
else
|
|
||||||
/ (?= [#{delim_pattern}] ) /mx
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
def initialize kind, interpreted, delim, heredoc = false
|
|
||||||
if heredoc
|
|
||||||
pattern = heredoc_pattern delim, interpreted, heredoc == :indented
|
|
||||||
delim = nil
|
|
||||||
else
|
|
||||||
pattern = STRING_PATTERN[ [delim, interpreted] ]
|
|
||||||
if closing_paren = CLOSING_PAREN[delim]
|
|
||||||
opening_paren = delim
|
|
||||||
delim = closing_paren
|
|
||||||
paren_depth = 1
|
|
||||||
end
|
|
||||||
end
|
|
||||||
super kind, interpreted, delim, heredoc, opening_paren, paren_depth, pattern, :initial
|
|
||||||
end
|
|
||||||
|
|
||||||
def heredoc_pattern delim, interpreted, indented
|
|
||||||
# delim = delim.dup # workaround for old Ruby
|
|
||||||
delim_pattern = Regexp.escape(delim)
|
|
||||||
delim_pattern = / (?:\A|\n) #{ '(?>[ \t]*)' if indented } #{ Regexp.new delim_pattern } $ /x
|
|
||||||
if interpreted
|
|
||||||
/ (?= #{delim_pattern}() | \\ | \# [{$@] ) /mx # $1 set == end of heredoc
|
|
||||||
else
|
|
||||||
/ (?= #{delim_pattern}() | \\ ) /mx
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
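
StringState is a plain Struct, so its fields can be inspected directly; the Ruby scanner builds one instance per string, regexp or heredoc. Sketch under the same loading assumption:

require 'coderay'
CodeRay.scan('', :ruby)   # sets up the Ruby scanner and its autoloads

state = CodeRay::Scanners::Ruby::StringState.new :string, true, '"'
p state.type          # => :string
p state.interpreted   # => true
p state.pattern       # lookahead regexp used by scan_until in the scanner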
@@ -1,174 +0,0 @@
module CodeRay module Scanners
|
|
||||||
|
|
||||||
# by Josh Goebel
|
|
||||||
class SQL < Scanner
|
|
||||||
|
|
||||||
register_for :sql
|
|
||||||
|
|
||||||
KEYWORDS = %w(
|
|
||||||
all and any as before begin between by case check collate
|
|
||||||
each else end exists
|
|
||||||
for foreign from full group having if in inner is join
|
|
||||||
like not of on or order outer over references
|
|
||||||
then to union using values when where
|
|
||||||
left right distinct
|
|
||||||
)
|
|
||||||
|
|
||||||
OBJECTS = %w(
|
|
||||||
database databases table tables column columns fields index constraint
|
|
||||||
constraints transaction function procedure row key view trigger
|
|
||||||
)
|
|
||||||
|
|
||||||
COMMANDS = %w(
|
|
||||||
add alter comment create delete drop grant insert into select update set
|
|
||||||
show prompt begin commit rollback replace truncate
|
|
||||||
)
|
|
||||||
|
|
||||||
PREDEFINED_TYPES = %w(
|
|
||||||
char varchar varchar2 enum binary text tinytext mediumtext
|
|
||||||
longtext blob tinyblob mediumblob longblob timestamp
|
|
||||||
date time datetime year double decimal float int
|
|
||||||
integer tinyint mediumint bigint smallint unsigned bit
|
|
||||||
bool boolean hex bin oct
|
|
||||||
)
|
|
||||||
|
|
||||||
PREDEFINED_FUNCTIONS = %w( sum cast substring abs pi count min max avg now )
|
|
||||||
|
|
||||||
DIRECTIVES = %w(
|
|
||||||
auto_increment unique default charset initially deferred
|
|
||||||
deferrable cascade immediate read write asc desc after
|
|
||||||
primary foreign return engine
|
|
||||||
)
|
|
||||||
|
|
||||||
PREDEFINED_CONSTANTS = %w( null true false )
|
|
||||||
|
|
||||||
IDENT_KIND = WordList::CaseIgnoring.new(:ident).
|
|
||||||
add(KEYWORDS, :keyword).
|
|
||||||
add(OBJECTS, :type).
|
|
||||||
add(COMMANDS, :class).
|
|
||||||
add(PREDEFINED_TYPES, :predefined_type).
|
|
||||||
add(PREDEFINED_CONSTANTS, :predefined_constant).
|
|
||||||
add(PREDEFINED_FUNCTIONS, :predefined).
|
|
||||||
add(DIRECTIVES, :directive)
|
|
||||||
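IDENT_KIND is what turns a bare word into a token kind during the scan (see IDENT_KIND[match] below). Assuming WordList::CaseIgnoring acts like a Hash with a default value that downcases its lookup key, which is how SELECT and select can match the same entry, a few example lookups against the lists above:

    IDENT_KIND['SELECT']    # => :class                (declared as a command above)
    IDENT_KIND['varchar']   # => :predefined_type
    IDENT_KIND['NULL']      # => :predefined_constant
    IDENT_KIND['my_table']  # => :ident                (the default kind)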
|
|
||||||
ESCAPE = / [rbfntv\n\\\/'"] | x[a-fA-F0-9]{1,2} | [0-7]{1,3} | . /mx
|
|
||||||
UNICODE_ESCAPE = / u[a-fA-F0-9]{4} | U[a-fA-F0-9]{8} /x
|
|
||||||
|
|
||||||
STRING_PREFIXES = /[xnb]|_\w+/i
|
|
||||||
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
|
|
||||||
state = :initial
|
|
||||||
string_type = nil
|
|
||||||
string_content = ''
|
|
||||||
name_expected = false
|
|
||||||
|
|
||||||
until eos?
|
|
||||||
|
|
||||||
if state == :initial
|
|
||||||
|
|
||||||
if match = scan(/ \s+ | \\\n /x)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
|
|
||||||
elsif match = scan(/(?:--\s?|#).*/)
|
|
||||||
encoder.text_token match, :comment
|
|
||||||
|
|
||||||
elsif match = scan(%r( /\* (!)? (?: .*? \*/ | .* ) )mx)
|
|
||||||
encoder.text_token match, self[1] ? :directive : :comment
|
|
||||||
|
|
||||||
elsif match = scan(/ [*\/=<>:;,!&^|()\[\]{}~%] | [-+\.](?!\d) /x)
|
|
||||||
name_expected = true if match == '.' && check(/[A-Za-z_]/)
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
|
|
||||||
elsif match = scan(/(#{STRING_PREFIXES})?([`"'])/o)
|
|
||||||
prefix = self[1]
|
|
||||||
string_type = self[2]
|
|
||||||
encoder.begin_group :string
|
|
||||||
encoder.text_token prefix, :modifier if prefix
|
|
||||||
match = string_type
|
|
||||||
state = :string
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
|
|
||||||
elsif match = scan(/ @? [A-Za-z_][A-Za-z_0-9]* /x)
|
|
||||||
encoder.text_token match, name_expected ? :ident : (match[0] == ?@ ? :variable : IDENT_KIND[match])
|
|
||||||
name_expected = false
|
|
||||||
|
|
||||||
elsif match = scan(/0[xX][0-9A-Fa-f]+/)
|
|
||||||
encoder.text_token match, :hex
|
|
||||||
|
|
||||||
elsif match = scan(/0[0-7]+(?![89.eEfF])/)
|
|
||||||
encoder.text_token match, :octal
|
|
||||||
|
|
||||||
elsif match = scan(/[-+]?(?>\d+)(?![.eEfF])/)
|
|
||||||
encoder.text_token match, :integer
|
|
||||||
|
|
||||||
elsif match = scan(/[-+]?(?:\d[fF]|\d*\.\d+(?:[eE][+-]?\d+)?|\d+[eE][+-]?\d+)/)
|
|
||||||
encoder.text_token match, :float
|
|
||||||
|
|
||||||
elsif match = scan(/\\N/)
|
|
||||||
encoder.text_token match, :predefined_constant
|
|
||||||
|
|
||||||
else
|
|
||||||
encoder.text_token getch, :error
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif state == :string
|
|
||||||
if match = scan(/[^\\"'`]+/)
|
|
||||||
string_content << match
|
|
||||||
next
|
|
||||||
elsif match = scan(/["'`]/)
|
|
||||||
if string_type == match
|
|
||||||
if peek(1) == string_type # doubling means escape
|
|
||||||
string_content << string_type << getch
|
|
||||||
next
|
|
||||||
end
|
|
||||||
unless string_content.empty?
|
|
||||||
encoder.text_token string_content, :content
|
|
||||||
string_content = ''
|
|
||||||
end
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
encoder.end_group :string
|
|
||||||
state = :initial
|
|
||||||
string_type = nil
|
|
||||||
else
|
|
||||||
string_content << match
|
|
||||||
end
|
|
||||||
elsif match = scan(/ \\ (?: #{ESCAPE} | #{UNICODE_ESCAPE} ) /mox)
|
|
||||||
unless string_content.empty?
|
|
||||||
encoder.text_token string_content, :content
|
|
||||||
string_content = ''
|
|
||||||
end
|
|
||||||
encoder.text_token match, :char
|
|
||||||
elsif match = scan(/ \\ . /mox)
|
|
||||||
string_content << match
|
|
||||||
next
|
|
||||||
elsif match = scan(/ \\ | $ /x)
|
|
||||||
unless string_content.empty?
|
|
||||||
encoder.text_token string_content, :content
|
|
||||||
string_content = ''
|
|
||||||
end
|
|
||||||
encoder.text_token match, :error
|
|
||||||
state = :initial
|
|
||||||
else
|
|
||||||
raise "else case \" reached; %p not handled." % peek(1), encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
else
|
|
||||||
raise_inspect 'else-case reached', encoder
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
if state == :string
|
|
||||||
encoder.end_group state
|
|
||||||
end
|
|
||||||
|
|
||||||
encoder
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end end
|
|
|
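One consequence of the "doubling means escape" branch above: a doubled quote inside an SQL string stays part of the :content token instead of closing it. A small sketch, assuming the scanner is reached through CodeRay.scan with the :sql id it registers:

    CodeRay.scan("'it''s'", :sql).tokens
    # => [:begin_group, :string,
    #     "'", :delimiter, "it''s", :content, "'", :delimiter,
    #     :end_group, :string]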
@ -1,26 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
# Scanner for plain text.
|
|
||||||
#
|
|
||||||
# Yields just one token of the kind :plain.
|
|
||||||
#
|
|
||||||
# Alias: +plaintext+, +plain+
|
|
||||||
class Text < Scanner
|
|
||||||
|
|
||||||
register_for :text
|
|
||||||
title 'Plain text'
|
|
||||||
|
|
||||||
KINDS_NOT_LOC = [:plain] # :nodoc:
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
encoder.text_token string, :plain
|
|
||||||
encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,17 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
load :html
|
|
||||||
|
|
||||||
# Scanner for XML.
|
|
||||||
#
|
|
||||||
# Currently this is the same scanner as Scanners::HTML.
|
|
||||||
class XML < HTML
|
|
||||||
|
|
||||||
register_for :xml
|
|
||||||
file_extension 'xml'
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,140 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Scanners
|
|
||||||
|
|
||||||
# Scanner for YAML.
|
|
||||||
#
|
|
||||||
# Based on the YAML scanner from Syntax by Jamis Buck.
|
|
||||||
class YAML < Scanner
|
|
||||||
|
|
||||||
register_for :yaml
|
|
||||||
file_extension 'yml'
|
|
||||||
|
|
||||||
KINDS_NOT_LOC = :all
|
|
||||||
|
|
||||||
protected
|
|
||||||
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
|
|
||||||
state = :initial
|
|
||||||
key_indent = string_indent = 0
|
|
||||||
|
|
||||||
until eos?
|
|
||||||
|
|
||||||
key_indent = nil if bol?
|
|
||||||
|
|
||||||
if match = scan(/ +[\t ]*/)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
|
|
||||||
elsif match = scan(/\n+/)
|
|
||||||
encoder.text_token match, :space
|
|
||||||
state = :initial if match.index(?\n)
|
|
||||||
|
|
||||||
elsif match = scan(/#.*/)
|
|
||||||
encoder.text_token match, :comment
|
|
||||||
|
|
||||||
elsif bol? and case
|
|
||||||
when match = scan(/---|\.\.\./)
|
|
||||||
encoder.begin_group :head
|
|
||||||
encoder.text_token match, :head
|
|
||||||
encoder.end_group :head
|
|
||||||
next
|
|
||||||
when match = scan(/%.*/)
|
|
||||||
encoder.text_token match, :doctype
|
|
||||||
next
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif state == :value and case
|
|
||||||
when !check(/(?:"[^"]*")(?=: |:$)/) && match = scan(/"/)
|
|
||||||
encoder.begin_group :string
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
encoder.text_token match, :content if match = scan(/ [^"\\]* (?: \\. [^"\\]* )* /mx)
|
|
||||||
encoder.text_token match, :delimiter if match = scan(/"/)
|
|
||||||
encoder.end_group :string
|
|
||||||
next
|
|
||||||
when match = scan(/[|>][-+]?/)
|
|
||||||
encoder.begin_group :string
|
|
||||||
encoder.text_token match, :delimiter
|
|
||||||
string_indent = key_indent || column(pos - match.size) - 1
|
|
||||||
encoder.text_token matched, :content if scan(/(?:\n+ {#{string_indent + 1}}.*)+/)
|
|
||||||
encoder.end_group :string
|
|
||||||
next
|
|
||||||
when match = scan(/(?![!"*&]).+?(?=$|\s+#)/)
|
|
||||||
encoder.begin_group :string
|
|
||||||
encoder.text_token match, :content
|
|
||||||
string_indent = key_indent || column(pos - match.size) - 1
|
|
||||||
encoder.text_token matched, :content if scan(/(?:\n+ {#{string_indent + 1}}.*)+/)
|
|
||||||
encoder.end_group :string
|
|
||||||
next
|
|
||||||
end
|
|
||||||
|
|
||||||
elsif case
|
|
||||||
when match = scan(/[-:](?= |$)/)
|
|
||||||
state = :value if state == :colon && (match == ':' || match == '-')
|
|
||||||
state = :value if state == :initial && match == '-'
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
next
|
|
||||||
when match = scan(/[,{}\[\]]/)
|
|
||||||
encoder.text_token match, :operator
|
|
||||||
next
|
|
||||||
when state == :initial && match = scan(/[\w.() ]*\S(?= *:(?: |$))/)
|
|
||||||
encoder.text_token match, :key
|
|
||||||
key_indent = column(pos - match.size) - 1
|
|
||||||
state = :colon
|
|
||||||
next
|
|
||||||
when match = scan(/(?:"[^"\n]*"|'[^'\n]*')(?= *:(?: |$))/)
|
|
||||||
encoder.begin_group :key
|
|
||||||
encoder.text_token match[0,1], :delimiter
|
|
||||||
encoder.text_token match[1..-2], :content
|
|
||||||
encoder.text_token match[-1,1], :delimiter
|
|
||||||
encoder.end_group :key
|
|
||||||
key_indent = column(pos - match.size) - 1
|
|
||||||
state = :colon
|
|
||||||
next
|
|
||||||
when match = scan(/(![\w\/]+)(:([\w:]+))?/)
|
|
||||||
encoder.text_token self[1], :type
|
|
||||||
if self[2]
|
|
||||||
encoder.text_token ':', :operator
|
|
||||||
encoder.text_token self[3], :class
|
|
||||||
end
|
|
||||||
next
|
|
||||||
when match = scan(/&\S+/)
|
|
||||||
encoder.text_token match, :variable
|
|
||||||
next
|
|
||||||
when match = scan(/\*\w+/)
|
|
||||||
encoder.text_token match, :global_variable
|
|
||||||
next
|
|
||||||
when match = scan(/<</)
|
|
||||||
encoder.text_token match, :class_variable
|
|
||||||
next
|
|
||||||
when match = scan(/\d\d:\d\d:\d\d/)
|
|
||||||
encoder.text_token match, :octal
|
|
||||||
next
|
|
||||||
when match = scan(/\d\d\d\d-\d\d-\d\d\s\d\d:\d\d:\d\d(\.\d+)? [-+]\d\d:\d\d/)
|
|
||||||
encoder.text_token match, :octal
|
|
||||||
next
|
|
||||||
when match = scan(/:\w+/)
|
|
||||||
encoder.text_token match, :symbol
|
|
||||||
next
|
|
||||||
when match = scan(/[^:\s]+(:(?! |$)[^:\s]*)* .*/)
|
|
||||||
encoder.text_token match, :error
|
|
||||||
next
|
|
||||||
when match = scan(/[^:\s]+(:(?! |$)[^:\s]*)*/)
|
|
||||||
encoder.text_token match, :error
|
|
||||||
next
|
|
||||||
end
|
|
||||||
|
|
||||||
else
|
|
||||||
raise if eos?
|
|
||||||
encoder.text_token getch, :error
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,23 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
|
|
||||||
# This module holds the Style class and its subclasses.
|
|
||||||
#
|
|
||||||
# See Plugin.
|
|
||||||
module Styles
|
|
||||||
extend PluginHost
|
|
||||||
plugin_path File.dirname(__FILE__), 'styles'
|
|
||||||
|
|
||||||
# Base class for styles.
|
|
||||||
#
|
|
||||||
# Styles are used by Encoders::HTML to colorize tokens.
|
|
||||||
class Style
|
|
||||||
extend Plugin
|
|
||||||
plugin_host Styles
|
|
||||||
|
|
||||||
DEFAULT_OPTIONS = { } # :nodoc:
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
|
@ -1,7 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Styles
|
|
||||||
|
|
||||||
default :alpha
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,143 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
module Styles
|
|
||||||
|
|
||||||
# A colorful theme using CSS 3 colors (with alpha channel).
|
|
||||||
class Alpha < Style
|
|
||||||
|
|
||||||
register_for :alpha
|
|
||||||
|
|
||||||
code_background = 'hsl(0,0%,95%)'
|
|
||||||
numbers_background = 'hsl(180,65%,90%)'
|
|
||||||
border_color = 'silver'
|
|
||||||
normal_color = 'black'
|
|
||||||
|
|
||||||
CSS_MAIN_STYLES = <<-MAIN # :nodoc:
|
|
||||||
.CodeRay {
|
|
||||||
background-color: #{code_background};
|
|
||||||
border: 1px solid #{border_color};
|
|
||||||
color: #{normal_color};
|
|
||||||
}
|
|
||||||
.CodeRay pre {
|
|
||||||
margin: 0px;
|
|
||||||
}
|
|
||||||
|
|
||||||
span.CodeRay { white-space: pre; border: 0px; padding: 2px; }
|
|
||||||
|
|
||||||
table.CodeRay { border-collapse: collapse; width: 100%; padding: 2px; }
|
|
||||||
table.CodeRay td { padding: 2px 4px; vertical-align: top; }
|
|
||||||
|
|
||||||
.CodeRay .line-numbers {
|
|
||||||
background-color: #{numbers_background};
|
|
||||||
color: gray;
|
|
||||||
text-align: right;
|
|
||||||
-webkit-user-select: none;
|
|
||||||
-moz-user-select: none;
|
|
||||||
user-select: none;
|
|
||||||
}
|
|
||||||
.CodeRay .line-numbers a {
|
|
||||||
background-color: #{numbers_background} !important;
|
|
||||||
color: gray !important;
|
|
||||||
text-decoration: none !important;
|
|
||||||
}
|
|
||||||
.CodeRay .line-numbers a:target { color: blue !important; }
|
|
||||||
.CodeRay .line-numbers .highlighted { color: red !important; }
|
|
||||||
.CodeRay .line-numbers .highlighted a { color: red !important; }
|
|
||||||
.CodeRay span.line-numbers { padding: 0px 4px; }
|
|
||||||
.CodeRay .line { display: block; float: left; width: 100%; }
|
|
||||||
.CodeRay .code { width: 100%; }
|
|
||||||
.CodeRay .code pre { overflow: auto; }
|
|
||||||
MAIN
|
|
||||||
|
|
||||||
TOKEN_COLORS = <<-'TOKENS'
|
|
||||||
.debug { color: white !important; background: blue !important; }
|
|
||||||
|
|
||||||
.annotation { color:#007 }
|
|
||||||
.attribute-name { color:#b48 }
|
|
||||||
.attribute-value { color:#700 }
|
|
||||||
.binary { color:#509 }
|
|
||||||
.char .content { color:#D20 }
|
|
||||||
.char .delimiter { color:#710 }
|
|
||||||
.char { color:#D20 }
|
|
||||||
.class { color:#B06; font-weight:bold }
|
|
||||||
.class-variable { color:#369 }
|
|
||||||
.color { color:#0A0 }
|
|
||||||
.comment { color:#777 }
|
|
||||||
.comment .char { color:#444 }
|
|
||||||
.comment .delimiter { color:#444 }
|
|
||||||
.complex { color:#A08 }
|
|
||||||
.constant { color:#036; font-weight:bold }
|
|
||||||
.decorator { color:#B0B }
|
|
||||||
.definition { color:#099; font-weight:bold }
|
|
||||||
.delimiter { color:black }
|
|
||||||
.directive { color:#088; font-weight:bold }
|
|
||||||
.doc { color:#970 }
|
|
||||||
.doc-string { color:#D42; font-weight:bold }
|
|
||||||
.doctype { color:#34b }
|
|
||||||
.entity { color:#800; font-weight:bold }
|
|
||||||
.error { color:#F00; background-color:#FAA }
|
|
||||||
.escape { color:#666 }
|
|
||||||
.exception { color:#C00; font-weight:bold }
|
|
||||||
.float { color:#60E }
|
|
||||||
.function { color:#06B; font-weight:bold }
|
|
||||||
.global-variable { color:#d70 }
|
|
||||||
.hex { color:#02b }
|
|
||||||
.imaginary { color:#f00 }
|
|
||||||
.include { color:#B44; font-weight:bold }
|
|
||||||
.inline { background-color: hsla(0,0%,0%,0.07); color: black }
|
|
||||||
.inline-delimiter { font-weight: bold; color: #666 }
|
|
||||||
.instance-variable { color:#33B }
|
|
||||||
.integer { color:#00D }
|
|
||||||
.key .char { color: #60f }
|
|
||||||
.key .delimiter { color: #404 }
|
|
||||||
.key { color: #606 }
|
|
||||||
.keyword { color:#080; font-weight:bold }
|
|
||||||
.label { color:#970; font-weight:bold }
|
|
||||||
.local-variable { color:#963 }
|
|
||||||
.namespace { color:#707; font-weight:bold }
|
|
||||||
.octal { color:#40E }
|
|
||||||
.operator { }
|
|
||||||
.predefined { color:#369; font-weight:bold }
|
|
||||||
.predefined-constant { color:#069 }
|
|
||||||
.predefined-type { color:#0a5; font-weight:bold }
|
|
||||||
.preprocessor { color:#579 }
|
|
||||||
.pseudo-class { color:#00C; font-weight:bold }
|
|
||||||
.regexp .content { color:#808 }
|
|
||||||
.regexp .delimiter { color:#404 }
|
|
||||||
.regexp .modifier { color:#C2C }
|
|
||||||
.regexp { background-color:hsla(300,100%,50%,0.06); }
|
|
||||||
.reserved { color:#080; font-weight:bold }
|
|
||||||
.shell .content { color:#2B2 }
|
|
||||||
.shell .delimiter { color:#161 }
|
|
||||||
.shell { background-color:hsla(120,100%,50%,0.06); }
|
|
||||||
.string .char { color: #b0b }
|
|
||||||
.string .content { color: #D20 }
|
|
||||||
.string .delimiter { color: #710 }
|
|
||||||
.string .modifier { color: #E40 }
|
|
||||||
.string { background-color:hsla(0,100%,50%,0.05); }
|
|
||||||
.symbol .content { color:#A60 }
|
|
||||||
.symbol .delimiter { color:#630 }
|
|
||||||
.symbol { color:#A60 }
|
|
||||||
.tag { color:#070 }
|
|
||||||
.type { color:#339; font-weight:bold }
|
|
||||||
.value { color: #088; }
|
|
||||||
.variable { color:#037 }
|
|
||||||
|
|
||||||
.insert { background: hsla(120,100%,50%,0.12) }
|
|
||||||
.delete { background: hsla(0,100%,50%,0.12) }
|
|
||||||
.change { color: #bbf; background: #007; }
|
|
||||||
.head { color: #f8f; background: #505 }
|
|
||||||
.head .filename { color: white; }
|
|
||||||
|
|
||||||
.delete .eyecatcher { background-color: hsla(0,100%,50%,0.2); border: 1px solid hsla(0,100%,45%,0.5); margin: -1px; border-bottom: none; border-top-left-radius: 5px; border-top-right-radius: 5px; }
|
|
||||||
.insert .eyecatcher { background-color: hsla(120,100%,50%,0.2); border: 1px solid hsla(120,100%,25%,0.5); margin: -1px; border-top: none; border-bottom-left-radius: 5px; border-bottom-right-radius: 5px; }
|
|
||||||
|
|
||||||
.insert .insert { color: #0c0; background:transparent; font-weight:bold }
|
|
||||||
.delete .delete { color: #c00; background:transparent; font-weight:bold }
|
|
||||||
.change .change { color: #88f }
|
|
||||||
.head .head { color: #f4f }
|
|
||||||
TOKENS
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,90 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
|
|
||||||
# A Hash of all known token kinds and their associated CSS classes.
|
|
||||||
TokenKinds = Hash.new do |h, k|
|
|
||||||
warn 'Undefined Token kind: %p' % [k] if $CODERAY_DEBUG
|
|
||||||
false
|
|
||||||
end
|
|
||||||
|
|
||||||
# speedup
|
|
||||||
TokenKinds.compare_by_identity if TokenKinds.respond_to? :compare_by_identity
|
|
||||||
|
|
||||||
TokenKinds.update( # :nodoc:
|
|
||||||
:annotation => 'annotation',
|
|
||||||
:attribute_name => 'attribute-name',
|
|
||||||
:attribute_value => 'attribute-value',
|
|
||||||
:binary => 'bin',
|
|
||||||
:char => 'char',
|
|
||||||
:class => 'class',
|
|
||||||
:class_variable => 'class-variable',
|
|
||||||
:color => 'color',
|
|
||||||
:comment => 'comment',
|
|
||||||
:complex => 'complex',
|
|
||||||
:constant => 'constant',
|
|
||||||
:content => 'content',
|
|
||||||
:debug => 'debug',
|
|
||||||
:decorator => 'decorator',
|
|
||||||
:definition => 'definition',
|
|
||||||
:delimiter => 'delimiter',
|
|
||||||
:directive => 'directive',
|
|
||||||
:doc => 'doc',
|
|
||||||
:doctype => 'doctype',
|
|
||||||
:doc_string => 'doc-string',
|
|
||||||
:entity => 'entity',
|
|
||||||
:error => 'error',
|
|
||||||
:escape => 'escape',
|
|
||||||
:exception => 'exception',
|
|
||||||
:filename => 'filename',
|
|
||||||
:float => 'float',
|
|
||||||
:function => 'function',
|
|
||||||
:global_variable => 'global-variable',
|
|
||||||
:hex => 'hex',
|
|
||||||
:imaginary => 'imaginary',
|
|
||||||
:important => 'important',
|
|
||||||
:include => 'include',
|
|
||||||
:inline => 'inline',
|
|
||||||
:inline_delimiter => 'inline-delimiter',
|
|
||||||
:instance_variable => 'instance-variable',
|
|
||||||
:integer => 'integer',
|
|
||||||
:key => 'key',
|
|
||||||
:keyword => 'keyword',
|
|
||||||
:label => 'label',
|
|
||||||
:local_variable => 'local-variable',
|
|
||||||
:modifier => 'modifier',
|
|
||||||
:namespace => 'namespace',
|
|
||||||
:octal => 'octal',
|
|
||||||
:predefined => 'predefined',
|
|
||||||
:predefined_constant => 'predefined-constant',
|
|
||||||
:predefined_type => 'predefined-type',
|
|
||||||
:preprocessor => 'preprocessor',
|
|
||||||
:pseudo_class => 'pseudo-class',
|
|
||||||
:regexp => 'regexp',
|
|
||||||
:reserved => 'reserved',
|
|
||||||
:shell => 'shell',
|
|
||||||
:string => 'string',
|
|
||||||
:symbol => 'symbol',
|
|
||||||
:tag => 'tag',
|
|
||||||
:type => 'type',
|
|
||||||
:value => 'value',
|
|
||||||
:variable => 'variable',
|
|
||||||
|
|
||||||
:change => 'change',
|
|
||||||
:delete => 'delete',
|
|
||||||
:head => 'head',
|
|
||||||
:insert => 'insert',
|
|
||||||
|
|
||||||
:eyecatcher => 'eyecatcher',
|
|
||||||
|
|
||||||
:ident => false,
|
|
||||||
:operator => false,
|
|
||||||
|
|
||||||
:space => false,
|
|
||||||
:plain => false
|
|
||||||
)
|
|
||||||
|
|
||||||
TokenKinds[:method] = TokenKinds[:function]
|
|
||||||
TokenKinds[:escape] = TokenKinds[:delimiter]
|
|
||||||
TokenKinds[:docstring] = TokenKinds[:comment]
|
|
||||||
|
|
||||||
TokenKinds.freeze
|
|
||||||
end
|
|
|
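This hash is what the encoders consult to map a token kind to a CSS class; kinds mapped to false simply get no class of their own. A few lookups against the table above (also exercised by the test suite further down):

    CodeRay::TokenKinds[:keyword]         # => 'keyword'
    CodeRay::TokenKinds[:attribute_name]  # => 'attribute-name'
    CodeRay::TokenKinds[:method]          # => 'function'   (aliased above)
    CodeRay::TokenKinds[:plain]           # => false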
@ -1,215 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
|
|
||||||
# GZip library for writing and reading token dumps.
|
|
||||||
autoload :GZip, 'coderay/helpers/gzip'
|
|
||||||
|
|
||||||
# = Tokens TODO: Rewrite!
|
|
||||||
#
|
|
||||||
# The Tokens class represents a list of tokens returned from
|
|
||||||
# a Scanner.
|
|
||||||
#
|
|
||||||
# A token is not a special object, just a two-element Array
|
|
||||||
# consisting of
|
|
||||||
# * the _token_ _text_ (the original source of the token in a String) or
|
|
||||||
# a _token_ _action_ (begin_group, end_group, begin_line, end_line)
|
|
||||||
# * the _token_ _kind_ (a Symbol representing the type of the token)
|
|
||||||
#
|
|
||||||
# A token looks like this:
|
|
||||||
#
|
|
||||||
# ['# It looks like this', :comment]
|
|
||||||
# ['3.1415926', :float]
|
|
||||||
# ['$^', :error]
|
|
||||||
#
|
|
||||||
# Some scanners also yield sub-tokens, represented by special
|
|
||||||
# token actions, namely begin_group and end_group.
|
|
||||||
#
|
|
||||||
# The Ruby scanner, for example, splits "a string" into:
|
|
||||||
#
|
|
||||||
# [
|
|
||||||
# [:begin_group, :string],
|
|
||||||
# ['"', :delimiter],
|
|
||||||
# ['a string', :content],
|
|
||||||
# ['"', :delimiter],
|
|
||||||
# [:end_group, :string]
|
|
||||||
# ]
|
|
||||||
#
|
|
||||||
# Tokens is the interface between Scanners and Encoders:
|
|
||||||
# The input is split and saved into a Tokens object. The Encoder
|
|
||||||
# then builds the output from this object.
|
|
||||||
#
|
|
||||||
# Thus, the syntax below becomes clear:
|
|
||||||
#
|
|
||||||
# CodeRay.scan('price = 2.59', :ruby).html
|
|
||||||
# # the Tokens object is here -------^
|
|
||||||
#
|
|
||||||
# See how small it is? ;)
|
|
||||||
#
|
|
||||||
# Tokens gives you the power to handle pre-scanned code very easily:
|
|
||||||
# You can convert it to a webpage, a YAML file, or dump it into a gzip'ed string
|
|
||||||
# that you put in your DB.
|
|
||||||
#
|
|
||||||
# It also allows you to generate tokens directly (without using a scanner),
|
|
||||||
# to load them from a file, and still use any Encoder that CodeRay provides.
|
|
||||||
class Tokens < Array
|
|
||||||
|
|
||||||
# The Scanner instance that created the tokens.
|
|
||||||
attr_accessor :scanner
|
|
||||||
|
|
||||||
# Encode the tokens using encoder.
|
|
||||||
#
|
|
||||||
# encoder can be
|
|
||||||
# * a symbol like :html or :statistic
|
|
||||||
# * an Encoder class
|
|
||||||
# * an Encoder object
|
|
||||||
#
|
|
||||||
# options are passed to the encoder.
|
|
||||||
def encode encoder, options = {}
|
|
||||||
encoder = Encoders[encoder].new options if encoder.respond_to? :to_sym
|
|
||||||
encoder.encode_tokens self, options
|
|
||||||
end
|
|
||||||
|
|
||||||
# Turn tokens into a string by concatenating them.
|
|
||||||
def to_s
|
|
||||||
encode CodeRay::Encoders::Encoder.new
|
|
||||||
end
|
|
||||||
|
|
||||||
# Redirects unknown methods to encoder calls.
|
|
||||||
#
|
|
||||||
# For example, if you call +tokens.html+, the HTML encoder
|
|
||||||
# is used to highlight the tokens.
|
|
||||||
def method_missing meth, options = {}
|
|
||||||
encode meth, options
|
|
||||||
rescue PluginHost::PluginNotFound
|
|
||||||
super
|
|
||||||
end
|
|
||||||
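Taken together, #encode and #method_missing give Tokens its convenient surface: pass an encoder symbol or object explicitly, or just call the encoder's name as a method. A short sketch, assuming the Statistic and Terminal encoders are available as they are elsewhere in this test suite:

    tokens = CodeRay.scan('1 + 1', :ruby).tokens     # a CodeRay::Tokens instance
    tokens.encode :statistic                          # look the encoder up by symbol
    tokens.encode CodeRay::Encoders::Terminal.new     # or pass a ready-made encoder
    tokens.html                                       # unknown method -> encode(:html)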
|
|
||||||
# Split the tokens into parts of the given +sizes+.
|
|
||||||
#
|
|
||||||
# The result will be an Array of Tokens objects. The parts have
|
|
||||||
# the text size specified by the parameter. In addition, each
|
|
||||||
# part closes all opened tokens. This is useful to insert tokens
|
|
||||||
# between them.
|
|
||||||
#
|
|
||||||
# This method is used by @Scanner#tokenize@ when called with an Array
|
|
||||||
# of source strings. The Diff encoder uses it for inline highlighting.
|
|
||||||
def split_into_parts *sizes
|
|
||||||
parts = []
|
|
||||||
opened = []
|
|
||||||
content = nil
|
|
||||||
part = Tokens.new
|
|
||||||
part_size = 0
|
|
||||||
size = sizes.first
|
|
||||||
i = 0
|
|
||||||
for item in self
|
|
||||||
case content
|
|
||||||
when nil
|
|
||||||
content = item
|
|
||||||
when String
|
|
||||||
if size && part_size + content.size > size # token must be cut
|
|
||||||
if part_size < size # some part of the token goes into this part
|
|
||||||
content = content.dup # content may not be safe to change
|
|
||||||
part << content.slice!(0, size - part_size) << item
|
|
||||||
end
|
|
||||||
# close all open groups and lines...
|
|
||||||
closing = opened.reverse.flatten.map do |content_or_kind|
|
|
||||||
case content_or_kind
|
|
||||||
when :begin_group
|
|
||||||
:end_group
|
|
||||||
when :begin_line
|
|
||||||
:end_line
|
|
||||||
else
|
|
||||||
content_or_kind
|
|
||||||
end
|
|
||||||
end
|
|
||||||
part.concat closing
|
|
||||||
begin
|
|
||||||
parts << part
|
|
||||||
part = Tokens.new
|
|
||||||
size = sizes[i += 1]
|
|
||||||
end until size.nil? || size > 0
|
|
||||||
# ...and open them again.
|
|
||||||
part.concat opened.flatten
|
|
||||||
part_size = 0
|
|
||||||
redo unless content.empty?
|
|
||||||
else
|
|
||||||
part << content << item
|
|
||||||
part_size += content.size
|
|
||||||
end
|
|
||||||
content = nil
|
|
||||||
when Symbol
|
|
||||||
case content
|
|
||||||
when :begin_group, :begin_line
|
|
||||||
opened << [content, item]
|
|
||||||
when :end_group, :end_line
|
|
||||||
opened.pop
|
|
||||||
else
|
|
||||||
raise ArgumentError, 'Unknown token action: %p, kind = %p' % [content, item]
|
|
||||||
end
|
|
||||||
part << content << item
|
|
||||||
content = nil
|
|
||||||
else
|
|
||||||
raise ArgumentError, 'Token input junk: %p, kind = %p' % [content, item]
|
|
||||||
end
|
|
||||||
end
|
|
||||||
parts << part
|
|
||||||
parts << Tokens.new while parts.size < sizes.size
|
|
||||||
parts
|
|
||||||
end
|
|
||||||
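A small illustration of #split_into_parts, assuming the plain-text scanner defined earlier in this diff emits a single :plain token: a six-character token split at size 3 is cut into two parts that still read back as the original halves.

    tokens = CodeRay.scan('foobar', :text).tokens
    left, right = tokens.split_into_parts(3, 3)
    left.to_s    # => "foo"
    right.to_s   # => "bar"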
|
|
||||||
# Dumps the object into a String that can be saved
|
|
||||||
# in files or databases.
|
|
||||||
#
|
|
||||||
# The dump is created with Marshal.dump;
|
|
||||||
# In addition, it is gzipped using GZip.gzip.
|
|
||||||
#
|
|
||||||
# The returned String object includes Undumping
|
|
||||||
# so it has an #undump method. See Tokens.load.
|
|
||||||
#
|
|
||||||
# You can configure the level of compression,
|
|
||||||
# but the default value 7 should be what you want
|
|
||||||
# in most cases as it is a good compromise between
|
|
||||||
# speed and compression rate.
|
|
||||||
#
|
|
||||||
# See GZip module.
|
|
||||||
def dump gzip_level = 7
|
|
||||||
dump = Marshal.dump self
|
|
||||||
dump = GZip.gzip dump, gzip_level
|
|
||||||
dump.extend Undumping
|
|
||||||
end
|
|
||||||
|
|
||||||
# Return the actual number of tokens.
|
|
||||||
def count
|
|
||||||
size / 2
|
|
||||||
end
|
|
||||||
|
|
||||||
# Include this module to give an object an #undump
|
|
||||||
# method.
|
|
||||||
#
|
|
||||||
# The string returned by Tokens.dump includes Undumping.
|
|
||||||
module Undumping
|
|
||||||
# Calls Tokens.load with itself.
|
|
||||||
def undump
|
|
||||||
Tokens.load self
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# Undump the object using Marshal.load, then
|
|
||||||
# unzip it using GZip.gunzip.
|
|
||||||
#
|
|
||||||
# The result is commonly a Tokens object, but
|
|
||||||
# this is not guaranteed.
|
|
||||||
def Tokens.load dump
|
|
||||||
dump = GZip.gunzip dump
|
|
||||||
@dump = Marshal.load dump
|
|
||||||
end
|
|
||||||
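A round trip through #dump and Tokens.load (or the #undump convenience the dump string gains via Undumping), sketched with a hand-built token list like the ones used in the tests below:

    tokens = CodeRay::Tokens['puts', :ident, ' ', :space]
    blob   = tokens.dump        # gzipped Marshal dump, extended with Undumping
    blob.undump == tokens       # => true, same as CodeRay::Tokens.load(blob)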
|
|
||||||
alias text_token push
|
|
||||||
def begin_group kind; push :begin_group, kind end
|
|
||||||
def end_group kind; push :end_group, kind end
|
|
||||||
def begin_line kind; push :begin_line, kind end
|
|
||||||
def end_line kind; push :end_line, kind end
|
|
||||||
alias tokens concat
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
|
@ -1,55 +0,0 @@
|
||||||
module CodeRay
|
|
||||||
|
|
||||||
# The result of a scan operation is a TokensProxy, but should act like Tokens.
|
|
||||||
#
|
|
||||||
# This proxy makes it possible to use the classic CodeRay.scan.encode API
|
|
||||||
# while still providing the benefits of direct streaming.
|
|
||||||
class TokensProxy
|
|
||||||
|
|
||||||
attr_accessor :input, :lang, :options, :block
|
|
||||||
|
|
||||||
# Create a new TokensProxy with the arguments of CodeRay.scan.
|
|
||||||
def initialize input, lang, options = {}, block = nil
|
|
||||||
@input = input
|
|
||||||
@lang = lang
|
|
||||||
@options = options
|
|
||||||
@block = block
|
|
||||||
end
|
|
||||||
|
|
||||||
# Call CodeRay.encode if +encoder+ is a Symbol;
|
|
||||||
# otherwise, convert the receiver to tokens and call encoder.encode_tokens.
|
|
||||||
def encode encoder, options = {}
|
|
||||||
if encoder.respond_to? :to_sym
|
|
||||||
CodeRay.encode(input, lang, encoder, options)
|
|
||||||
else
|
|
||||||
encoder.encode_tokens tokens, options
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# Tries to call encode;
|
|
||||||
# delegates to tokens otherwise.
|
|
||||||
def method_missing method, *args, &blk
|
|
||||||
encode method.to_sym, *args
|
|
||||||
rescue PluginHost::PluginNotFound
|
|
||||||
tokens.send(method, *args, &blk)
|
|
||||||
end
|
|
||||||
|
|
||||||
# The (cached) result of the tokenized input; a Tokens instance.
|
|
||||||
def tokens
|
|
||||||
@tokens ||= scanner.tokenize(input)
|
|
||||||
end
|
|
||||||
|
|
||||||
# A (cached) scanner instance to use for the scan task.
|
|
||||||
def scanner
|
|
||||||
@scanner ||= CodeRay.scanner(lang, options, &block)
|
|
||||||
end
|
|
||||||
|
|
||||||
# Iterate over the tokens; delegates to the Tokens instance.
|
|
||||||
def each *args, &blk
|
|
||||||
tokens.each(*args, &blk)
|
|
||||||
self
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
|
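In practice the proxy keeps CodeRay.scan cheap until output is actually requested. A minimal usage sketch:

    proxy = CodeRay.scan('x = 1', :ruby)   # a TokensProxy; nothing is tokenized yet
    proxy.html                             # encoder symbols go straight to CodeRay.encode
    proxy.tokens                           # forces scanner.tokenize and caches the Tokens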
@ -1,3 +0,0 @@
module CodeRay
  VERSION = '1.0.0'
end
@ -1,304 +0,0 @@
|
||||||
# encoding: utf-8
|
|
||||||
require 'test/unit'
|
|
||||||
require File.expand_path('../../lib/assert_warning', __FILE__)
|
|
||||||
|
|
||||||
$:.unshift File.expand_path('../../../lib', __FILE__)
|
|
||||||
require 'coderay'
|
|
||||||
|
|
||||||
class BasicTest < Test::Unit::TestCase
|
|
||||||
|
|
||||||
def test_version
|
|
||||||
assert_nothing_raised do
|
|
||||||
assert_match(/\A\d\.\d\.\d?\z/, CodeRay::VERSION)
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
RUBY_TEST_CODE = 'puts "Hello, World!"'
|
|
||||||
|
|
||||||
RUBY_TEST_TOKENS = [
|
|
||||||
['puts', :ident],
|
|
||||||
[' ', :space],
|
|
||||||
[:begin_group, :string],
|
|
||||||
['"', :delimiter],
|
|
||||||
['Hello, World!', :content],
|
|
||||||
['"', :delimiter],
|
|
||||||
[:end_group, :string]
|
|
||||||
].flatten
|
|
||||||
def test_simple_scan
|
|
||||||
assert_nothing_raised do
|
|
||||||
assert_equal RUBY_TEST_TOKENS, CodeRay.scan(RUBY_TEST_CODE, :ruby).tokens
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
RUBY_TEST_HTML = 'puts <span class="string"><span class="delimiter">"</span>' +
|
|
||||||
'<span class="content">Hello, World!</span><span class="delimiter">"</span></span>'
|
|
||||||
def test_simple_highlight
|
|
||||||
assert_nothing_raised do
|
|
||||||
assert_equal RUBY_TEST_HTML, CodeRay.scan(RUBY_TEST_CODE, :ruby).html
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_scan_file
|
|
||||||
CodeRay.scan_file __FILE__
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_encode
|
|
||||||
assert_equal 1, CodeRay.encode('test', :python, :count)
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_encode_tokens
|
|
||||||
assert_equal 1, CodeRay.encode_tokens(CodeRay::Tokens['test', :string], :count)
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_encode_file
|
|
||||||
assert_equal File.read(__FILE__), CodeRay.encode_file(__FILE__, :text)
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_highlight
|
|
||||||
assert_match '<pre>test</pre>', CodeRay.highlight('test', :python)
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_highlight_file
|
|
||||||
assert_match "require <span class=\"string\"><span class=\"delimiter\">'</span><span class=\"content\">test/unit</span><span class=\"delimiter\">'</span></span>\n", CodeRay.highlight_file(__FILE__)
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_duo
|
|
||||||
assert_equal(RUBY_TEST_CODE,
|
|
||||||
CodeRay::Duo[:plain, :text].highlight(RUBY_TEST_CODE))
|
|
||||||
assert_equal(RUBY_TEST_CODE,
|
|
||||||
CodeRay::Duo[:plain => :text].highlight(RUBY_TEST_CODE))
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_duo_stream
|
|
||||||
assert_equal(RUBY_TEST_CODE,
|
|
||||||
CodeRay::Duo[:plain, :text].highlight(RUBY_TEST_CODE, :stream => true))
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_comment_filter
|
|
||||||
assert_equal <<-EXPECTED, CodeRay.scan(<<-INPUT, :ruby).comment_filter.text
|
|
||||||
#!/usr/bin/env ruby
|
|
||||||
|
|
||||||
code
|
|
||||||
|
|
||||||
more code
|
|
||||||
EXPECTED
|
|
||||||
#!/usr/bin/env ruby
|
|
||||||
=begin
|
|
||||||
A multi-line comment.
|
|
||||||
=end
|
|
||||||
code
|
|
||||||
# A single-line comment.
|
|
||||||
more code # and another comment, in-line.
|
|
||||||
INPUT
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_lines_of_code
|
|
||||||
assert_equal 2, CodeRay.scan(<<-INPUT, :ruby).lines_of_code
|
|
||||||
#!/usr/bin/env ruby
|
|
||||||
=begin
|
|
||||||
A multi-line comment.
|
|
||||||
=end
|
|
||||||
code
|
|
||||||
# A single-line comment.
|
|
||||||
more code # and another comment, in-line.
|
|
||||||
INPUT
|
|
||||||
rHTML = <<-RHTML
|
|
||||||
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
|
|
||||||
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
|
|
||||||
|
|
||||||
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
|
|
||||||
<head>
|
|
||||||
<meta http-equiv="content-type" content="text/html;charset=UTF-8" />
|
|
||||||
<title><%= controller.controller_name.titleize %>: <%= controller.action_name %></title>
|
|
||||||
<%= stylesheet_link_tag 'scaffold' %>
|
|
||||||
</head>
|
|
||||||
<body>
|
|
||||||
|
|
||||||
<p style="color: green"><%= flash[:notice] %></p>
|
|
||||||
|
|
||||||
<div id="main">
|
|
||||||
<%= yield %>
|
|
||||||
</div>
|
|
||||||
|
|
||||||
</body>
|
|
||||||
</html>
|
|
||||||
RHTML
|
|
||||||
assert_equal 0, CodeRay.scan(rHTML, :html).lines_of_code
|
|
||||||
assert_equal 0, CodeRay.scan(rHTML, :php).lines_of_code
|
|
||||||
assert_equal 0, CodeRay.scan(rHTML, :yaml).lines_of_code
|
|
||||||
assert_equal 4, CodeRay.scan(rHTML, :erb).lines_of_code
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_list_of_encoders
|
|
||||||
assert_kind_of(Array, CodeRay::Encoders.list)
|
|
||||||
assert CodeRay::Encoders.list.include?(:count)
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_list_of_scanners
|
|
||||||
assert_kind_of(Array, CodeRay::Scanners.list)
|
|
||||||
assert CodeRay::Scanners.list.include?(:text)
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_token_kinds
|
|
||||||
assert_kind_of Hash, CodeRay::TokenKinds
|
|
||||||
for kind, css_class in CodeRay::TokenKinds
|
|
||||||
assert_kind_of Symbol, kind
|
|
||||||
if css_class != false
|
|
||||||
assert_kind_of String, css_class, "TokenKinds[%p] == %p" % [kind, css_class]
|
|
||||||
end
|
|
||||||
end
|
|
||||||
assert_equal 'reserved', CodeRay::TokenKinds[:reserved]
|
|
||||||
assert_warning 'Undefined Token kind: :shibboleet' do
|
|
||||||
assert_equal false, CodeRay::TokenKinds[:shibboleet]
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
class Milk < CodeRay::Encoders::Encoder
|
|
||||||
FILE_EXTENSION = 'cocoa'
|
|
||||||
end
|
|
||||||
|
|
||||||
class HoneyBee < CodeRay::Encoders::Encoder
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_encoder_file_extension
|
|
||||||
assert_nothing_raised do
|
|
||||||
assert_equal 'html', CodeRay::Encoders::Page::FILE_EXTENSION
|
|
||||||
assert_equal 'cocoa', Milk::FILE_EXTENSION
|
|
||||||
assert_equal 'cocoa', Milk.new.file_extension
|
|
||||||
assert_equal 'honeybee', HoneyBee::FILE_EXTENSION
|
|
||||||
assert_equal 'honeybee', HoneyBee.new.file_extension
|
|
||||||
end
|
|
||||||
assert_raise NameError do
|
|
||||||
HoneyBee::MISSING_CONSTANT
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_encoder_tokens
|
|
||||||
encoder = CodeRay::Encoders::Encoder.new
|
|
||||||
encoder.send :setup, {}
|
|
||||||
assert_raise(ArgumentError) { encoder.token :strange, '' }
|
|
||||||
encoder.token 'test', :debug
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_encoder_deprecated_interface
|
|
||||||
encoder = CodeRay::Encoders::Encoder.new
|
|
||||||
encoder.send :setup, {}
|
|
||||||
assert_warning 'Using old Tokens#<< interface.' do
|
|
||||||
encoder << ['test', :content]
|
|
||||||
end
|
|
||||||
assert_raise ArgumentError do
|
|
||||||
encoder << [:strange, :input]
|
|
||||||
end
|
|
||||||
assert_raise ArgumentError do
|
|
||||||
encoder.encode_tokens [['test', :token]]
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
def encoder_token_interface_deprecation_warning_given
|
|
||||||
CodeRay::Encoders::Encoder.send :class_variable_get, :@@CODERAY_TOKEN_INTERFACE_DEPRECATION_WARNING_GIVEN
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_scanner_file_extension
|
|
||||||
assert_equal 'rb', CodeRay::Scanners::Ruby.file_extension
|
|
||||||
assert_equal 'rb', CodeRay::Scanners::Ruby.new.file_extension
|
|
||||||
assert_equal 'java', CodeRay::Scanners::Java.file_extension
|
|
||||||
assert_equal 'java', CodeRay::Scanners::Java.new.file_extension
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_scanner_lang
|
|
||||||
assert_equal :ruby, CodeRay::Scanners::Ruby.lang
|
|
||||||
assert_equal :ruby, CodeRay::Scanners::Ruby.new.lang
|
|
||||||
assert_equal :java, CodeRay::Scanners::Java.lang
|
|
||||||
assert_equal :java, CodeRay::Scanners::Java.new.lang
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_scanner_tokenize
|
|
||||||
assert_equal ['foo', :plain], CodeRay::Scanners::Plain.new.tokenize('foo')
|
|
||||||
assert_equal [['foo', :plain], ['bar', :plain]], CodeRay::Scanners::Plain.new.tokenize(['foo', 'bar'])
|
|
||||||
CodeRay::Scanners::Plain.new.tokenize 42
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_scanner_tokens
|
|
||||||
scanner = CodeRay::Scanners::Plain.new
|
|
||||||
scanner.tokenize('foo')
|
|
||||||
assert_equal ['foo', :plain], scanner.tokens
|
|
||||||
scanner.string = ''
|
|
||||||
assert_equal ['', :plain], scanner.tokens
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_scanner_line_and_column
|
|
||||||
scanner = CodeRay::Scanners::Plain.new "foo\nbär+quux"
|
|
||||||
assert_equal 0, scanner.pos
|
|
||||||
assert_equal 1, scanner.line
|
|
||||||
assert_equal 1, scanner.column
|
|
||||||
scanner.scan(/foo/)
|
|
||||||
assert_equal 3, scanner.pos
|
|
||||||
assert_equal 1, scanner.line
|
|
||||||
assert_equal 4, scanner.column
|
|
||||||
scanner.scan(/\n/)
|
|
||||||
assert_equal 4, scanner.pos
|
|
||||||
assert_equal 2, scanner.line
|
|
||||||
assert_equal 1, scanner.column
|
|
||||||
scanner.scan(/b/)
|
|
||||||
assert_equal 5, scanner.pos
|
|
||||||
assert_equal 2, scanner.line
|
|
||||||
assert_equal 2, scanner.column
|
|
||||||
scanner.scan(/a/)
|
|
||||||
assert_equal 5, scanner.pos
|
|
||||||
assert_equal 2, scanner.line
|
|
||||||
assert_equal 2, scanner.column
|
|
||||||
scanner.scan(/ä/)
|
|
||||||
assert_equal 7, scanner.pos
|
|
||||||
assert_equal 2, scanner.line
|
|
||||||
assert_equal 4, scanner.column
|
|
||||||
scanner.scan(/r/)
|
|
||||||
assert_equal 8, scanner.pos
|
|
||||||
assert_equal 2, scanner.line
|
|
||||||
assert_equal 5, scanner.column
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_scanner_use_subclasses
|
|
||||||
assert_raise NotImplementedError do
|
|
||||||
CodeRay::Scanners::Scanner.new
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
class InvalidScanner < CodeRay::Scanners::Scanner
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_scanner_scan_tokens
|
|
||||||
assert_raise NotImplementedError do
|
|
||||||
InvalidScanner.new.tokenize ''
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
class RaisingScanner < CodeRay::Scanners::Scanner
|
|
||||||
def scan_tokens encoder, options
|
|
||||||
raise_inspect 'message', [], :initial
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_scanner_raise_inspect
|
|
||||||
assert_raise CodeRay::Scanners::Scanner::ScanError do
|
|
||||||
RaisingScanner.new.tokenize ''
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_scan_a_frozen_string
|
|
||||||
assert_nothing_raised do
|
|
||||||
CodeRay.scan RUBY_VERSION, :ruby
|
|
||||||
CodeRay.scan RUBY_VERSION, :plain
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_scan_a_non_string
|
|
||||||
assert_nothing_raised do
|
|
||||||
CodeRay.scan 42, :ruby
|
|
||||||
CodeRay.scan nil, :ruby
|
|
||||||
CodeRay.scan self, :ruby
|
|
||||||
CodeRay.encode ENV.to_hash, :ruby, :page
|
|
||||||
CodeRay.highlight CodeRay, :plain
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
|
@ -1,130 +0,0 @@
|
||||||
require 'test/unit'
|
|
||||||
|
|
||||||
$:.unshift File.expand_path('../../../lib', __FILE__)
|
|
||||||
require 'coderay'
|
|
||||||
|
|
||||||
class ExamplesTest < Test::Unit::TestCase
|
|
||||||
|
|
||||||
def test_examples
|
|
||||||
# output as HTML div (using inline CSS styles)
|
|
||||||
div = CodeRay.scan('puts "Hello, world!"', :ruby).div
|
|
||||||
assert_equal <<-DIV, div
|
|
||||||
<div class="CodeRay">
|
|
||||||
<div class="code"><pre>puts <span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">"</span><span style="color:#D20">Hello, world!</span><span style="color:#710">"</span></span></pre></div>
|
|
||||||
</div>
|
|
||||||
DIV
|
|
||||||
|
|
||||||
# ...with line numbers
|
|
||||||
div = CodeRay.scan(<<-CODE.chomp, :ruby).div(:line_numbers => :table)
|
|
||||||
5.times do
|
|
||||||
puts 'Hello, world!'
|
|
||||||
end
|
|
||||||
CODE
|
|
||||||
assert_equal <<-DIV, div
|
|
||||||
<table class="CodeRay"><tr>
|
|
||||||
<td class="line-numbers" title="double click to toggle" ondblclick="with (this.firstChild.style) { display = (display == '') ? 'none' : '' }"><pre><a href="#n1" name="n1">1</a>
|
|
||||||
<a href="#n2" name="n2">2</a>
|
|
||||||
<a href="#n3" name="n3">3</a>
|
|
||||||
</pre></td>
|
|
||||||
<td class="code"><pre><span style="color:#00D">5</span>.times <span style="color:#080;font-weight:bold">do</span>
|
|
||||||
puts <span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">'</span><span style="color:#D20">Hello, world!</span><span style="color:#710">'</span></span>
|
|
||||||
<span style="color:#080;font-weight:bold">end</span></pre></td>
|
|
||||||
</tr></table>
|
|
||||||
DIV
|
|
||||||
|
|
||||||
# output as standalone HTML page (using CSS classes)
|
|
||||||
page = CodeRay.scan('puts "Hello, world!"', :ruby).page
|
|
||||||
assert page[<<-PAGE]
|
|
||||||
<body style="background-color: white;">
|
|
||||||
|
|
||||||
<table class="CodeRay"><tr>
|
|
||||||
<td class="line-numbers" title="double click to toggle" ondblclick="with (this.firstChild.style) { display = (display == '') ? 'none' : '' }"><pre>
|
|
||||||
</pre></td>
|
|
||||||
<td class="code"><pre>puts <span class="string"><span class="delimiter">"</span><span class="content">Hello, world!</span><span class="delimiter">"</span></span></pre></td>
|
|
||||||
</tr></table>
|
|
||||||
|
|
||||||
</body>
|
|
||||||
PAGE
|
|
||||||
|
|
||||||
# keep scanned tokens for later use
|
|
||||||
tokens = CodeRay.scan('{ "just": "an", "example": 42 }', :json)
|
|
||||||
assert_kind_of CodeRay::TokensProxy, tokens
|
|
||||||
|
|
||||||
assert_equal ["{", :operator, " ", :space, :begin_group, :key,
|
|
||||||
"\"", :delimiter, "just", :content, "\"", :delimiter,
|
|
||||||
:end_group, :key, ":", :operator, " ", :space,
|
|
||||||
:begin_group, :string, "\"", :delimiter, "an", :content,
|
|
||||||
"\"", :delimiter, :end_group, :string, ",", :operator,
|
|
||||||
" ", :space, :begin_group, :key, "\"", :delimiter,
|
|
||||||
"example", :content, "\"", :delimiter, :end_group, :key,
|
|
||||||
":", :operator, " ", :space, "42", :integer,
|
|
||||||
" ", :space, "}", :operator], tokens.tokens
|
|
||||||
|
|
||||||
# produce a token statistic
|
|
||||||
assert_equal <<-STATISTIC, tokens.statistic
|
|
||||||
|
|
||||||
Code Statistics
|
|
||||||
|
|
||||||
Tokens 26
|
|
||||||
Non-Whitespace 15
|
|
||||||
Bytes Total 31
|
|
||||||
|
|
||||||
Token Types (7):
|
|
||||||
type count ratio size (average)
|
|
||||||
-------------------------------------------------------------
|
|
||||||
TOTAL 26 100.00 % 1.2
|
|
||||||
delimiter 6 23.08 % 1.0
|
|
||||||
operator 5 19.23 % 1.0
|
|
||||||
space 5 19.23 % 1.0
|
|
||||||
key 4 15.38 % 0.0
|
|
||||||
:begin_group 3 11.54 % 0.0
|
|
||||||
:end_group 3 11.54 % 0.0
|
|
||||||
content 3 11.54 % 4.3
|
|
||||||
string 2 7.69 % 0.0
|
|
||||||
integer 1 3.85 % 2.0
|
|
||||||
|
|
||||||
STATISTIC
|
|
||||||
|
|
||||||
# count the tokens
|
|
||||||
assert_equal 26, tokens.count
|
|
||||||
|
|
||||||
# produce a HTML div, but with CSS classes
|
|
||||||
div = tokens.div(:css => :class)
|
|
||||||
assert_equal <<-DIV, div
|
|
||||||
<div class="CodeRay">
|
|
||||||
<div class="code"><pre>{ <span class="key"><span class="delimiter">"</span><span class="content">just</span><span class="delimiter">"</span></span>: <span class="string"><span class="delimiter">"</span><span class="content">an</span><span class="delimiter">"</span></span>, <span class="key"><span class="delimiter">"</span><span class="content">example</span><span class="delimiter">"</span></span>: <span class="integer">42</span> }</pre></div>
|
|
||||||
</div>
|
|
||||||
DIV
|
|
||||||
|
|
||||||
# highlight a file (HTML div); guess the file type based on the extension
|
|
||||||
require 'coderay/helpers/file_type'
|
|
||||||
assert_equal :ruby, CodeRay::FileType[__FILE__]
|
|
||||||
|
|
||||||
# get a new scanner for Python
|
|
||||||
python_scanner = CodeRay.scanner :python
|
|
||||||
assert_kind_of CodeRay::Scanners::Python, python_scanner
|
|
||||||
|
|
||||||
# get a new encoder for terminal
|
|
||||||
terminal_encoder = CodeRay.encoder :term
|
|
||||||
assert_kind_of CodeRay::Encoders::Terminal, terminal_encoder
|
|
||||||
|
|
||||||
# scanning into tokens
|
|
||||||
tokens = python_scanner.tokenize 'import this; # The Zen of Python'
|
|
||||||
assert_equal ["import", :keyword, " ", :space, "this", :include,
|
|
||||||
";", :operator, " ", :space, "# The Zen of Python", :comment], tokens
|
|
||||||
|
|
||||||
# format the tokens
|
|
||||||
term = terminal_encoder.encode_tokens(tokens)
|
|
||||||
assert_equal "\e[1;31mimport\e[0m \e[33mthis\e[0m; \e[37m# The Zen of Python\e[0m", term
|
|
||||||
|
|
||||||
# re-using scanner and encoder
|
|
||||||
ruby_highlighter = CodeRay::Duo[:ruby, :div]
|
|
||||||
div = ruby_highlighter.encode('puts "Hello, world!"')
|
|
||||||
assert_equal <<-DIV, div
|
|
||||||
<div class="CodeRay">
|
|
||||||
<div class="code"><pre>puts <span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">"</span><span style="color:#D20">Hello, world!</span><span style="color:#710">"</span></span></pre></div>
|
|
||||||
</div>
|
|
||||||
DIV
|
|
||||||
end
|
|
||||||
|
|
||||||
end
|
|
|
@ -1,84 +0,0 @@
|
||||||
require 'test/unit'
|
|
||||||
require File.expand_path('../../lib/assert_warning', __FILE__)
|
|
||||||
|
|
||||||
$:.unshift File.expand_path('../../../lib', __FILE__)
|
|
||||||
require 'coderay'
|
|
||||||
|
|
||||||
begin
|
|
||||||
require 'rubygems' unless defined? Gem
|
|
||||||
gem 'RedCloth', '>= 4.0.3' rescue nil
|
|
||||||
require 'redcloth'
|
|
||||||
rescue LoadError
|
|
||||||
warn 'RedCloth not found - skipping for_redcloth tests.'
|
|
||||||
undef RedCloth if defined? RedCloth
|
|
||||||
end
|
|
||||||
|
|
||||||
class BasicTest < Test::Unit::TestCase
|
|
||||||
|
|
||||||
def test_for_redcloth
|
|
||||||
require 'coderay/for_redcloth'
|
|
||||||
assert_equal "<p><span lang=\"ruby\" class=\"CodeRay\">puts <span style=\"background-color:hsla(0,100%,50%,0.05)\"><span style=\"color:#710\">"</span><span style=\"color:#D20\">Hello, World!</span><span style=\"color:#710\">"</span></span></span></p>",
|
|
||||||
RedCloth.new('@[ruby]puts "Hello, World!"@').to_html
|
|
||||||
assert_equal <<-BLOCKCODE.chomp,
|
|
||||||
<div lang="ruby" class="CodeRay">
|
|
||||||
<div class="code"><pre>puts <span style="background-color:hsla(0,100%,50%,0.05)"><span style="color:#710">"</span><span style="color:#D20">Hello, World!</span><span style="color:#710">"</span></span></pre></div>
|
|
||||||
</div>
|
|
||||||
BLOCKCODE
|
|
||||||
RedCloth.new('bc[ruby]. puts "Hello, World!"').to_html
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_for_redcloth_no_lang
|
|
||||||
require 'coderay/for_redcloth'
|
|
||||||
assert_equal "<p><code>puts \"Hello, World!\"</code></p>",
|
|
||||||
RedCloth.new('@puts "Hello, World!"@').to_html
|
|
||||||
assert_equal <<-BLOCKCODE.chomp,
|
|
||||||
<pre><code>puts \"Hello, World!\"</code></pre>
|
|
||||||
BLOCKCODE
|
|
||||||
RedCloth.new('bc. puts "Hello, World!"').to_html
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_for_redcloth_style
|
|
||||||
require 'coderay/for_redcloth'
|
|
||||||
assert_equal <<-BLOCKCODE.chomp,
|
|
||||||
<pre style=\"color: red;\"><code style=\"color: red;\">puts \"Hello, World!\"</code></pre>
|
|
||||||
BLOCKCODE
|
|
||||||
RedCloth.new('bc{color: red}. puts "Hello, World!"').to_html
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_for_redcloth_escapes
|
|
||||||
require 'coderay/for_redcloth'
|
|
||||||
assert_equal '<p><span lang="ruby" class="CodeRay">></span></p>',
|
|
||||||
RedCloth.new('@[ruby]>@').to_html
|
|
||||||
assert_equal <<-BLOCKCODE.chomp,
|
|
||||||
<div lang="ruby" class="CodeRay">
|
|
||||||
<div class="code"><pre>&</pre></div>
|
|
||||||
</div>
|
|
||||||
BLOCKCODE
|
|
||||||
RedCloth.new('bc[ruby]. &').to_html
|
|
||||||
end
|
|
||||||
|
|
||||||
def test_for_redcloth_escapes2
|
|
||||||
require 'coderay/for_redcloth'
|
|
||||||
assert_equal "<p><span lang=\"c\" class=\"CodeRay\"><span style=\"color:#579\">#include</span> <span style=\"color:#B44;font-weight:bold\"><test.h></span></span></p>",
|
|
||||||
RedCloth.new('@[c]#include <test.h>@').to_html
|
|
||||||
end
|
|
||||||
|
|
||||||
# See http://jgarber.lighthouseapp.com/projects/13054/tickets/124-code-markup-does-not-allow-brackets.
|
|
||||||
def test_for_redcloth_false_positive
|
|
||||||
require 'coderay/for_redcloth'
|
|
||||||
assert_warning 'CodeRay::Scanners could not load plugin :project; falling back to :text' do
|
|
||||||
assert_equal '<p><code>[project]_dff.skjd</code></p>',
|
|
||||||
RedCloth.new('@[project]_dff.skjd@').to_html
|
|
||||||
end
|
|
||||||
# false positive, but expected behavior / known issue
|
|
||||||
assert_equal "<p><span lang=\"ruby\" class=\"CodeRay\">_dff.skjd</span></p>",
|
|
||||||
RedCloth.new('@[ruby]_dff.skjd@').to_html
|
|
||||||
assert_warning 'CodeRay::Scanners could not load plugin :project; falling back to :text' do
|
|
||||||
assert_equal <<-BLOCKCODE.chomp,
|
|
||||||
<pre><code>[project]_dff.skjd</code></pre>
|
|
||||||
BLOCKCODE
|
|
||||||
RedCloth.new('bc. [project]_dff.skjd').to_html
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
end if defined? RedCloth
|
|
|
@ -1,15 +0,0 @@
|
||||||
require 'test/unit'
|
|
||||||
|
|
||||||
$VERBOSE = $CODERAY_DEBUG = true
|
|
||||||
$:.unshift File.expand_path('../../../lib', __FILE__)
|
|
||||||
require 'coderay'
|
|
||||||
|
|
||||||
mydir = File.dirname(__FILE__)
|
|
||||||
suite = Dir[File.join(mydir, '*.rb')].
|
|
||||||
map { |tc| File.basename(tc).sub(/\.rb$/, '') } - %w'suite for_redcloth'
|
|
||||||
|
|
||||||
puts "Running basic CodeRay #{CodeRay::VERSION} tests: #{suite.join(', ')}"
|
|
||||||
|
|
||||||
for test_case in suite
|
|
||||||
load File.join(mydir, test_case + '.rb')
|
|
||||||
end
|
|
|
@ -1,11 +0,0 @@
|
||||||
require 'rubygems'
|
|
||||||
#require 'redgreen/autotest'
|
|
||||||
require 'autotest/timestamp'
|
|
||||||
|
|
||||||
Autotest.add_hook :initialize do |autotest|
|
|
||||||
%w{.git .hg .DS_Store ._* tmp log doc}.each do |exception|
|
|
||||||
autotest.add_exception(exception)
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# vim: syntax=ruby
|
|
|
@ -1,2 +0,0 @@
--colour
--format documentation
@ -1,200 +0,0 @@
|
||||||
--- !ruby/object:Gem::Specification
|
|
||||||
name: net-ldap
|
|
||||||
version: !ruby/object:Gem::Version
|
|
||||||
hash: 19
|
|
||||||
prerelease:
|
|
||||||
segments:
|
|
||||||
- 0
|
|
||||||
- 2
|
|
||||||
- 2
|
|
||||||
version: 0.2.2
|
|
||||||
platform: ruby
|
|
||||||
authors:
|
|
||||||
- Francis Cianfrocca
|
|
||||||
- Emiel van de Laar
|
|
||||||
- Rory O'Connell
|
|
||||||
- Kaspar Schiess
|
|
||||||
- Austin Ziegler
|
|
||||||
autorequire:
|
|
||||||
bindir: bin
|
|
||||||
cert_chain: []
|
|
||||||
|
|
||||||
date: 2011-03-26 00:00:00 Z
|
|
||||||
dependencies:
|
|
||||||
- !ruby/object:Gem::Dependency
|
|
||||||
name: rubyforge
|
|
||||||
prerelease: false
|
|
||||||
requirement: &id001 !ruby/object:Gem::Requirement
|
|
||||||
none: false
|
|
||||||
requirements:
|
|
||||||
- - ">="
|
|
||||||
- !ruby/object:Gem::Version
|
|
||||||
hash: 7
|
|
||||||
segments:
|
|
||||||
- 2
|
|
||||||
- 0
|
|
||||||
- 4
|
|
||||||
version: 2.0.4
|
|
||||||
type: :development
|
|
||||||
version_requirements: *id001
|
|
||||||
- !ruby/object:Gem::Dependency
|
|
||||||
name: hoe-git
|
|
||||||
prerelease: false
|
|
||||||
requirement: &id002 !ruby/object:Gem::Requirement
|
|
||||||
none: false
|
|
||||||
requirements:
|
|
||||||
- - ~>
|
|
||||||
- !ruby/object:Gem::Version
|
|
||||||
hash: 1
|
|
||||||
segments:
|
|
||||||
- 1
|
|
||||||
version: "1"
|
|
||||||
type: :development
|
|
||||||
version_requirements: *id002
|
|
||||||
- !ruby/object:Gem::Dependency
|
|
||||||
name: hoe-gemspec
|
|
||||||
prerelease: false
|
|
||||||
requirement: &id003 !ruby/object:Gem::Requirement
|
|
||||||
none: false
|
|
||||||
requirements:
|
|
||||||
- - ~>
|
|
||||||
- !ruby/object:Gem::Version
|
|
||||||
hash: 1
|
|
||||||
segments:
|
|
||||||
- 1
|
|
||||||
version: "1"
|
|
||||||
type: :development
|
|
||||||
version_requirements: *id003
|
|
||||||
- !ruby/object:Gem::Dependency
|
|
||||||
name: metaid
|
|
||||||
prerelease: false
|
|
||||||
requirement: &id004 !ruby/object:Gem::Requirement
|
|
||||||
none: false
|
|
||||||
requirements:
|
|
||||||
- - ~>
|
|
||||||
- !ruby/object:Gem::Version
|
|
||||||
hash: 1
|
|
||||||
segments:
|
|
||||||
- 1
|
|
||||||
version: "1"
|
|
||||||
type: :development
|
|
||||||
version_requirements: *id004
|
|
||||||
- !ruby/object:Gem::Dependency
|
|
||||||
name: flexmock
|
|
||||||
prerelease: false
|
|
||||||
requirement: &id005 !ruby/object:Gem::Requirement
|
|
||||||
none: false
|
|
||||||
requirements:
|
|
||||||
- - ~>
|
|
||||||
- !ruby/object:Gem::Version
|
|
||||||
hash: 59
|
|
||||||
segments:
|
|
||||||
- 0
|
|
||||||
- 9
|
|
||||||
- 0
|
|
||||||
version: 0.9.0
|
|
||||||
type: :development
|
|
||||||
version_requirements: *id005
|
|
||||||
- !ruby/object:Gem::Dependency
|
|
||||||
name: rspec
|
|
||||||
prerelease: false
|
|
||||||
requirement: &id006 !ruby/object:Gem::Requirement
|
|
||||||
none: false
|
|
||||||
requirements:
|
|
||||||
- - ~>
|
|
||||||
- !ruby/object:Gem::Version
|
|
||||||
hash: 3
|
|
||||||
segments:
|
|
||||||
- 2
|
|
||||||
- 0
|
|
||||||
version: "2.0"
|
|
||||||
type: :development
|
|
||||||
version_requirements: *id006
|
|
||||||
- !ruby/object:Gem::Dependency
|
|
||||||
name: hoe
|
|
||||||
prerelease: false
|
|
||||||
requirement: &id007 !ruby/object:Gem::Requirement
|
|
||||||
none: false
|
|
||||||
requirements:
|
|
||||||
- - ">="
|
|
||||||
- !ruby/object:Gem::Version
|
|
||||||
hash: 41
|
|
||||||
segments:
|
|
||||||
- 2
|
|
||||||
- 9
|
|
||||||
- 1
|
|
||||||
version: 2.9.1
|
|
||||||
type: :development
|
|
||||||
version_requirements: *id007
|
|
||||||
description: "Net::LDAP for Ruby (also called net-ldap) implements client access for the\n\
|
|
||||||
Lightweight Directory Access Protocol (LDAP), an IETF standard protocol for\n\
|
|
||||||
accessing distributed directory services. Net::LDAP is written completely in\n\
|
|
||||||
Ruby with no external dependencies. It supports most LDAP client features and a\n\
|
|
||||||
subset of server features as well.\n\n\
|
|
||||||
Net::LDAP has been tested against modern popular LDAP servers including\n\
|
|
||||||
OpenLDAP and Active Directory. The current release is mostly compliant with\n\
|
|
||||||
earlier versions of the IETF LDAP RFCs (2251\xE2\x80\x932256, 2829\xE2\x80\x932830, 3377, and 3771).\n\
|
|
||||||
Our roadmap for Net::LDAP 1.0 is to gain full <em>client</em> compliance with\n\
|
|
||||||
the most recent LDAP RFCs (4510\xE2\x80\x934519, plus portions of 4520\xE2\x80\x934532)."
|
|
||||||
email:
|
|
||||||
- blackhedd@rubyforge.org
|
|
||||||
- gemiel@gmail.com
|
|
||||||
- rory.ocon@gmail.com
|
|
||||||
- kaspar.schiess@absurd.li
|
|
||||||
- austin@rubyforge.org
|
|
||||||
executables: []
|
|
||||||
|
|
||||||
extensions: []
|
|
||||||
|
|
||||||
extra_rdoc_files:
|
|
||||||
- Manifest.txt
|
|
||||||
- Contributors.rdoc
|
|
||||||
- Hacking.rdoc
|
|
||||||
- History.rdoc
|
|
||||||
- License.rdoc
|
|
||||||
- README.rdoc
|
|
||||||
files:
|
|
||||||
- Manifest.txt
|
|
||||||
- Contributors.rdoc
|
|
||||||
- Hacking.rdoc
|
|
||||||
- History.rdoc
|
|
||||||
- License.rdoc
|
|
||||||
- README.rdoc
|
|
||||||
homepage: http://net-ldap.rubyforge.org/
|
|
||||||
licenses: []
|
|
||||||
|
|
||||||
post_install_message:
|
|
||||||
rdoc_options:
|
|
||||||
- --main
|
|
||||||
- README.rdoc
|
|
||||||
require_paths:
|
|
||||||
- lib
|
|
||||||
required_ruby_version: !ruby/object:Gem::Requirement
|
|
||||||
none: false
|
|
||||||
requirements:
|
|
||||||
- - ">="
|
|
||||||
- !ruby/object:Gem::Version
|
|
||||||
hash: 57
|
|
||||||
segments:
|
|
||||||
- 1
|
|
||||||
- 8
|
|
||||||
- 7
|
|
||||||
version: 1.8.7
|
|
||||||
required_rubygems_version: !ruby/object:Gem::Requirement
|
|
||||||
none: false
|
|
||||||
requirements:
|
|
||||||
- - ">="
|
|
||||||
- !ruby/object:Gem::Version
|
|
||||||
hash: 3
|
|
||||||
segments:
|
|
||||||
- 0
|
|
||||||
version: "0"
|
|
||||||
requirements: []
|
|
||||||
|
|
||||||
rubyforge_project: net-ldap
|
|
||||||
rubygems_version: 1.7.2
|
|
||||||
signing_key:
|
|
||||||
specification_version: 3
|
|
||||||
summary: Net::LDAP for Ruby (also called net-ldap) implements client access for the Lightweight Directory Access Protocol (LDAP), an IETF standard protocol for accessing distributed directory services
|
|
||||||
test_files: []
|
|
||||||
|
|
|
@ -1,21 +0,0 @@
|
||||||
== Contributors
|
|
||||||
|
|
||||||
Net::LDAP was originally developed by:
|
|
||||||
|
|
||||||
* Francis Cianfrocca (garbagecat)
|
|
||||||
|
|
||||||
Contributions since:
|
|
||||||
|
|
||||||
* Emiel van de Laar (emiel)
|
|
||||||
* Rory O'Connell (roryo)
|
|
||||||
* Kaspar Schiess (kschiess)
|
|
||||||
* Austin Ziegler (halostatue)
|
|
||||||
* Dimitrij Denissenko (dim)
|
|
||||||
* James Hewitt (jamstah)
|
|
||||||
* Kouhei Sutou (kou)
|
|
||||||
* Lars Tobias Skjong-Børsting (larstobi)
|
|
||||||
* Rory O'Connell (roryo)
|
|
||||||
* Tony Headford (tonyheadford)
|
|
||||||
* Derek Harmel (derekharmel)
|
|
||||||
* Erik Hetzner (egh)
|
|
||||||
* nowhereman
|
|
|
@ -1,68 +0,0 @@
|
||||||
= Hacking on Net::LDAP
|
|
||||||
|
|
||||||
We welcome your contributions to Net::LDAP. We accept most contributions, but
|
|
||||||
there are ways to increase the chance of your patch being accepted quickly.
|
|
||||||
|
|
||||||
== Licensing
|
|
||||||
|
|
||||||
Net::LDAP 0.2 and later are licensed under an MIT-style license; any
|
|
||||||
contributions after 2010-04-20 must be under this license to be accepted.
|
|
||||||
|
|
||||||
== Formatting
|
|
||||||
|
|
||||||
* Your patches should be formatted like the rest of Net::LDAP.
|
|
||||||
* We use a text wrap of 76–78 characters, especially for documentation
|
|
||||||
contents.
|
|
||||||
* Operators should have spaces around them.
|
|
||||||
* Method definitions should have parentheses around arguments (and no
|
|
||||||
parentheses if there are no arguments).
|
|
||||||
* Indentation should be kept as flat as possible; this may mean being more
|
|
||||||
explicit with constants.
|
|
||||||
|
|
||||||
|
|
||||||
We welcome your contributions to Net::LDAP. To increase the chances of your
|
|
||||||
patches being accepted, we recommend that you follow the guidelines below:
|
|
||||||
|
|
||||||
== Documentation
|
|
||||||
|
|
||||||
* Documentation: {net-ldap}[http://net-ldap.rubyforge.org/]
|
|
||||||
|
|
||||||
It is very important that, if you add new methods or objects, your code is
|
|
||||||
well-documented. The purpose of the changes should be clearly described so that
|
|
||||||
even if this is a feature we do not use, we can understand its purpose.
|
|
||||||
|
|
||||||
We also encourage documentation-only contributions that improve the
|
|
||||||
documentation of Net::LDAP.
|
|
||||||
|
|
||||||
We encourage you to provide a good summary of your changes as a modification to
|
|
||||||
+History.rdoc+, and if you're not yet named as a contributor, include a
|
|
||||||
modification to +Contributors.rdoc+ to add yourself.
|
|
||||||
|
|
||||||
== Tests
|
|
||||||
|
|
||||||
The Net::LDAP team uses RSpec for unit testing; all changes must have rspec
|
|
||||||
tests for any new or changed features.
|
|
||||||
|
|
||||||
Your changes should have been tested against at least one real LDAP server; the
|
|
||||||
current tests are not sufficient to find all possible bugs. It's unlikely that
|
|
||||||
they will ever be sufficient given the variations in LDAP server behaviour.
|
|
||||||
|
|
||||||
If you're introducing a new feature, we would prefer that you provide
|
|
||||||
us with a sample LDIF data file for importing into LDAP servers for testing.
|
|
||||||
|
|
||||||
== Development Dependencies
|
|
||||||
|
|
||||||
Net::LDAP uses several libraries during development, all of which can be
|
|
||||||
installed using RubyGems.
|
|
||||||
|
|
||||||
* *hoe*
|
|
||||||
* *hoe-git*
|
|
||||||
* *metaid*
|
|
||||||
* *rspec*
|
|
||||||
* *flexmock*
|
|
||||||
|
|
||||||
== Participation
|
|
||||||
|
|
||||||
* RubyForge: {net-ldap}[http://rubyforge.org/projects/net-ldap]
|
|
||||||
* GitHub: {ruby-ldap/ruby-net-ldap}[https://github.com/ruby-ldap/ruby-net-ldap/]
|
|
||||||
* Group: {ruby-ldap}[http://groups.google.com/group/ruby-ldap]
|
|
|
@ -1,172 +0,0 @@
|
||||||
=== Net::LDAP 0.2.2 / 2011-03-26
|
|
||||||
* Bug Fixes:
|
|
||||||
* Fixed the call to Net::LDAP.modify_ops from Net::LDAP#modify.
|
|
||||||
|
|
||||||
=== Net::LDAP 0.2.1 / 2011-03-23
|
|
||||||
* Bug Fixes:
|
|
||||||
* Net::LDAP.modify_ops was broken and is now fixed.
|
|
||||||
|
|
||||||
=== Net::LDAP 0.2 / 2011-03-22
|
|
||||||
* Major Enhancements:
|
|
||||||
* Net::LDAP::Filter changes:
|
|
||||||
* Filters can only be constructed using our custom constructors (eq, ge,
|
|
||||||
etc.). Cleaned up the code to reflect the private new.
|
|
||||||
* Fixed #to_ber to output a BER representation for :ne filters. Simplified
|
|
||||||
the BER construction for substring matching.
|
|
||||||
* Added Filter.join(left, right), Filter.intersect(left, right), and
|
|
||||||
Filter.negate(filter) to match Filter#&, Filter#|, and Filter#~@ to
|
|
||||||
prevent those operators from having problems with the private new.
|
|
||||||
* Added Filter.present and Filter.present? aliases for the method
|
|
||||||
previously only known as Filter.pres.
|
|
||||||
* Added Filter.escape to escape strings for use in filters, based on
|
|
||||||
rfc4515.
|
|
||||||
* Added Filter.equals, Filter.begins, Filter.ends and Filter.contains,
|
|
||||||
which automatically escape input for use in a filter string.
|
|
||||||
* Cleaned up Net::LDAP::Filter::FilterParser to handle branches better.
|
|
||||||
Fixed some of the regular expressions to be more canonically defined.
|
|
||||||
* Correctly handles single-branch branches.
|
|
||||||
* Cleaned up the string representation of Filter objects.
|
|
||||||
* Added experimental support for RFC4515 extensible matching (e.g.,
|
|
||||||
"(cn:caseExactMatch:=Fred Flintstone)"); provided by "nowhereman".
|
|
||||||
* Net::LDAP::DN class representing an automatically escaping/unescaping
|
|
||||||
distinguished name for LDAP queries.
|
|
||||||
* Minor Enhancements:
|
|
||||||
* SSL capabilities will be enabled or disabled based on whether we can load
|
|
||||||
OpenSSL successfully or not.
|
|
||||||
* Moved the core class extensions from being in the Net::LDAP
|
|
||||||
hierarchy to the Net::BER hierarchy as most of the methods therein are
|
|
||||||
related to BER-encoding values. This will make extracting Net::BER from
|
|
||||||
Net::LDAP easier in the future.
|
|
||||||
* Added some unit tests for the BER core extensions.
|
|
||||||
* Paging controls are only sent where they are supported.
|
|
||||||
* Documentation Changes:
|
|
||||||
* Core class extension methods under Net::BER.
|
|
||||||
* Extensive changes to Net::BER documentation.
|
|
||||||
* Cleaned up some rdoc oddities, suppressed empty documentation sections
|
|
||||||
where possible.
|
|
||||||
* Added a document describing how to contribute to Net::LDAP most
|
|
||||||
effectively.
|
|
||||||
* Added a document recognizing contributors to Net::LDAP.
|
|
||||||
* Extended unit testing:
|
|
||||||
* Added some unit tests for the BER core extensions.
|
|
||||||
* The LDIF test data file was split for Ruby 1.9 regexp support.
|
|
||||||
* Added a cruisecontrol.rb task.
|
|
||||||
* Converted some test/unit tests to specs.
|
|
||||||
* Code clean-up:
|
|
||||||
* Made the formatting of code consistent across all files.
|
|
||||||
* Removed Net::BER::BERParser::TagClasses as it does not appear to be used.
|
|
||||||
* Replaced calls to #to_a with calls to Kernel#Array; since Ruby 1.8.3, the
|
|
||||||
default #to_a implementation has been deprecated and should be replaced
|
|
||||||
either with calls to Kernel#Array or [value].flatten(1).
|
|
||||||
* Modified #add and #modify to return a Pdu#result_code instead of a
|
|
||||||
Pdu#result. This may be changed in Net::LDAP 1.0 to return the full
|
|
||||||
Pdu#result, but if we do so, it will be that way for all LDAP calls
|
|
||||||
involving Pdu objects.
|
|
||||||
* Renamed Net::LDAP::Psw to Net::LDAP::Password with a corresponding filename
|
|
||||||
change.
|
|
||||||
* Removed the stub file lib/net/ldif.rb and class Net::LDIF.
|
|
||||||
* Project Management:
|
|
||||||
* Changed the license from Ruby + GPL to MIT with the agreement of the
|
|
||||||
original author (Francis Cianfrocca) and the named contributors. Versions
|
|
||||||
prior to 0.2.0 are still available under the Ruby + GPL license.
|
|
||||||
|
|
||||||
=== Net::LDAP 0.1.1 / 2010-03-18
|
|
||||||
* Fixing a critical problem with sockets.
|
|
||||||
|
|
||||||
=== Net::LDAP 0.1 / 2010-03-17
|
|
||||||
* Small fixes throughout, more to come.
|
|
||||||
* Ruby 1.9 support added.
|
|
||||||
* Ruby 1.8.6 and below support removed. If we can figure out a compatible way
|
|
||||||
to reintroduce this, we will.
|
|
||||||
* New maintainers, new project repository location. Please see the README.txt.
|
|
||||||
|
|
||||||
=== Net::LDAP 0.0.5 / 2009-03-xx
|
|
||||||
* 13 minor enhancements:
|
|
||||||
* Added Net::LDAP::Entry#to_ldif
|
|
||||||
* Supported rootDSE searches with a new API.
|
|
||||||
* Added preliminary (still undocumented) support for SASL authentication.
|
|
||||||
* Supported several constructs from the server side of the LDAP protocol.
|
|
||||||
* Added a "consuming" String#read_ber! method.
|
|
||||||
* Added some support for SNMP data-handling.
|
|
||||||
* Belatedly added a patch contributed by Kouhei Sutou last October.
|
|
||||||
The patch adds start_tls support.
|
|
||||||
* Added Net::LDAP#search_subschema_entry
|
|
||||||
* Added Net::LDAP::Filter#parse_ber, which constructs Net::LDAP::Filter
|
|
||||||
objects directly from BER objects that represent search filters in
|
|
||||||
LDAP SearchRequest packets.
|
|
||||||
* Added Net::LDAP::Filter#execute, which allows arbitrary processing
|
|
||||||
based on LDAP filters.
|
|
||||||
* Changed Net::LDAP::Entry so it can be marshalled and unmarshalled.
|
|
||||||
Thanks to an anonymous feature requester who only left the name
|
|
||||||
"Jammy."
|
|
||||||
* Added support for binary values in Net::LDAP::Entry LDIF conversions
|
|
||||||
and marshalling.
|
|
||||||
* Migrated to 'hoe' as the new project droid.
|
|
||||||
* 14 bugs fixed:
|
|
||||||
* Silenced some annoying warnings in filter.rb. Thanks to "barjunk"
|
|
||||||
for pointing this out.
|
|
||||||
* Some fairly extensive performance optimizations in the BER parser.
|
|
||||||
* Fixed a bug in Net::LDAP::Entry::from_single_ldif_string noticed by
|
|
||||||
Matthias Tarasiewicz.
|
|
||||||
* Removed an erroneous LdapError value, noticed by Kouhei Sutou.
|
|
||||||
* Supported attributes containing blanks (cn=Babs Jensen) to
|
|
||||||
Filter#construct. Suggested by an anonymous Rubyforge user.
|
|
||||||
* Added missing syntactic support for Filter ANDs, NOTs and a few other
|
|
||||||
things.
|
|
||||||
* Extended support for server-reported error messages. This was provisionally
|
|
||||||
added to Net::LDAP#add, and eventually will be added to other methods.
|
|
||||||
* Fixed bug in Net::LDAP#bind. We were ignoring the passed-in auth parm.
|
|
||||||
Thanks to Kouhei Sutou for spotting it.
|
|
||||||
* Patched filter syntax to support octal \XX codes. Thanks to Kouhei Sutou
|
|
||||||
for the patch.
|
|
||||||
* Applied an additional patch from Kouhei.
|
|
||||||
* Allowed comma in filter strings, suggested by Kouhei.
|
|
||||||
* 04Sep07, Changed four error classes to inherit from StandardError rather than
|
|
||||||
Exception, in order to be friendlier to irb. Suggested by Kouhei.
|
|
||||||
* Ensure connections are closed. Thanks to Kristian Meier.
|
|
||||||
* Minor bug fixes here and there.
|
|
||||||
|
|
||||||
=== Net::LDAP 0.0.4 / 2006-08-15
|
|
||||||
* Undeprecated Net::LDAP#modify. Thanks to Justin Forder for
|
|
||||||
providing the rationale for this.
|
|
||||||
* Added a much-expanded set of special characters to the parser
|
|
||||||
for RFC-2254 filters. Thanks to Andre Nathan.
|
|
||||||
* Changed Net::LDAP#search so you can pass it a filter in string form.
|
|
||||||
The conversion to a Net::LDAP::Filter now happens automatically.
|
|
||||||
* Implemented Net::LDAP#bind_as (preliminary and subject to change).
|
|
||||||
Thanks to Simon Claret for valuable suggestions and for helping test.
|
|
||||||
* Fixed bug in Net::LDAP#open that was preventing #open from being
|
|
||||||
called more than once on a given Net::LDAP object.
|
|
||||||
|
|
||||||
=== Net::LDAP 0.0.3 / 2006-07-26
|
|
||||||
* Added simple TLS encryption.
|
|
||||||
Thanks to Garett Shulman for suggestions and for helping test.
|
|
||||||
|
|
||||||
=== Net::LDAP 0.0.2 / 2006-07-12
|
|
||||||
* Fixed malformation in distro tarball and gem.
|
|
||||||
* Improved documentation.
|
|
||||||
* Supported "paged search control."
|
|
||||||
* Added a range of API improvements.
|
|
||||||
* Thanks to Andre Nathan, andre@digirati.com.br, for valuable
|
|
||||||
suggestions.
|
|
||||||
* Added support for LE and GE search filters.
|
|
||||||
* Added support for Search referrals.
|
|
||||||
* Fixed a regression with openldap 2.2.x and higher caused
|
|
||||||
by the introduction of RFC-2696 controls. Thanks to Andre
|
|
||||||
Nathan for reporting the problem.
|
|
||||||
* Added support for RFC-2254 filter syntax.
|
|
||||||
|
|
||||||
=== Net::LDAP 0.0.1 / 2006-05-01
|
|
||||||
* Initial release.
|
|
||||||
* Client functionality is near-complete, although the APIs
|
|
||||||
are not guaranteed and may change depending on feedback
|
|
||||||
from the community.
|
|
||||||
* We're internally working on a Ruby-based implementation
|
|
||||||
of a full-featured, production-quality LDAP server,
|
|
||||||
which will leverage the underlying LDAP and BER functionality
|
|
||||||
in Net::LDAP.
|
|
||||||
* Please tell us if you would be interested in seeing a public
|
|
||||||
release of the LDAP server.
|
|
||||||
* Grateful acknowledgement to Austin Ziegler, who reviewed
|
|
||||||
this code and provided the release framework, including
|
|
||||||
minitar.
|
|
|
@ -1,29 +0,0 @@
|
||||||
== License
|
|
||||||
|
|
||||||
This software is available under the terms of the MIT license.
|
|
||||||
|
|
||||||
Copyright 2006–2011 by Francis Cianfrocca and other contributors.
|
|
||||||
|
|
||||||
Permission is hereby granted, free of charge, to any person obtaining
|
|
||||||
a copy of this software and associated documentation files (the
|
|
||||||
"Software"), to deal in the Software without restriction, including
|
|
||||||
without limitation the rights to use, copy, modify, merge, publish,
|
|
||||||
distribute, sublicense, and/or sell copies of the Software, and to
|
|
||||||
permit persons to whom the Software is furnished to do so, subject to
|
|
||||||
the following conditions:
|
|
||||||
|
|
||||||
The above copyright notice and this permission notice shall be
|
|
||||||
included in all copies or substantial portions of the Software.
|
|
||||||
|
|
||||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
|
||||||
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
|
||||||
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
|
||||||
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
|
||||||
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
|
||||||
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
|
||||||
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
|
||||||
|
|
||||||
=== Notice of License Change
|
|
||||||
|
|
||||||
Versions prior to 0.2 were under Ruby's dual license with the GNU GPL. With
|
|
||||||
this release (0.2), Net::LDAP is now under the MIT license.
|
|
|
@ -1,49 +0,0 @@
|
||||||
.autotest
|
|
||||||
.rspec
|
|
||||||
Contributors.rdoc
|
|
||||||
Hacking.rdoc
|
|
||||||
History.rdoc
|
|
||||||
License.rdoc
|
|
||||||
Manifest.txt
|
|
||||||
README.rdoc
|
|
||||||
Rakefile
|
|
||||||
autotest/discover.rb
|
|
||||||
lib/net-ldap.rb
|
|
||||||
lib/net/ber.rb
|
|
||||||
lib/net/ber/ber_parser.rb
|
|
||||||
lib/net/ber/core_ext.rb
|
|
||||||
lib/net/ber/core_ext/array.rb
|
|
||||||
lib/net/ber/core_ext/bignum.rb
|
|
||||||
lib/net/ber/core_ext/false_class.rb
|
|
||||||
lib/net/ber/core_ext/fixnum.rb
|
|
||||||
lib/net/ber/core_ext/string.rb
|
|
||||||
lib/net/ber/core_ext/true_class.rb
|
|
||||||
lib/net/ldap.rb
|
|
||||||
lib/net/ldap/dataset.rb
|
|
||||||
lib/net/ldap/dn.rb
|
|
||||||
lib/net/ldap/entry.rb
|
|
||||||
lib/net/ldap/filter.rb
|
|
||||||
lib/net/ldap/password.rb
|
|
||||||
lib/net/ldap/pdu.rb
|
|
||||||
lib/net/snmp.rb
|
|
||||||
net-ldap.gemspec
|
|
||||||
spec/integration/ssl_ber_spec.rb
|
|
||||||
spec/spec.opts
|
|
||||||
spec/spec_helper.rb
|
|
||||||
spec/unit/ber/ber_spec.rb
|
|
||||||
spec/unit/ber/core_ext/string_spec.rb
|
|
||||||
spec/unit/ldap/dn_spec.rb
|
|
||||||
spec/unit/ldap/entry_spec.rb
|
|
||||||
spec/unit/ldap/filter_spec.rb
|
|
||||||
spec/unit/ldap_spec.rb
|
|
||||||
test/common.rb
|
|
||||||
test/test_entry.rb
|
|
||||||
test/test_filter.rb
|
|
||||||
test/test_ldap_connection.rb
|
|
||||||
test/test_ldif.rb
|
|
||||||
test/test_password.rb
|
|
||||||
test/test_rename.rb
|
|
||||||
test/test_snmp.rb
|
|
||||||
test/testdata.ldif
|
|
||||||
testserver/ldapserver.rb
|
|
||||||
testserver/testdata.ldif
|
|
|
@ -1,52 +0,0 @@
|
||||||
= Net::LDAP for Ruby
|
|
||||||
|
|
||||||
== Description
|
|
||||||
|
|
||||||
Net::LDAP for Ruby (also called net-ldap) implements client access for the
|
|
||||||
Lightweight Directory Access Protocol (LDAP), an IETF standard protocol for
|
|
||||||
accessing distributed directory services. Net::LDAP is written completely in
|
|
||||||
Ruby with no external dependencies. It supports most LDAP client features and a
|
|
||||||
subset of server features as well.
|
|
||||||
|
|
||||||
Net::LDAP has been tested against modern popular LDAP servers including
|
|
||||||
OpenLDAP and Active Directory. The current release is mostly compliant with
|
|
||||||
earlier versions of the IETF LDAP RFCs (2251–2256, 2829–2830, 3377, and 3771).
|
|
||||||
Our roadmap for Net::LDAP 1.0 is to gain full <em>client</em> compliance with
|
|
||||||
the most recent LDAP RFCs (4510–4519, plus portions of 4520–4532).
|
|
||||||
|
|
||||||
== Where
|
|
||||||
|
|
||||||
* {RubyForge}[http://rubyforge.org/projects/net-ldap]
|
|
||||||
* {GitHub}[https://github.com/ruby-ldap/ruby-net-ldap]
|
|
||||||
* {ruby-ldap@googlegroups.com}[http://groups.google.com/group/ruby-ldap]
|
|
||||||
* {Documentation}[http://net-ldap.rubyforge.org/]
|
|
||||||
|
|
||||||
The Net::LDAP for Ruby documentation, project description, and main downloads
|
|
||||||
can currently be found on {RubyForge}[http://rubyforge.org/projects/net-ldap].
|
|
||||||
|
|
||||||
== Synopsis
|
|
||||||
|
|
||||||
See Net::LDAP for documentation and usage samples.
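
A minimal usage sketch (added here, not part of the original README): the
host, credentials, and base DN are placeholders, and the method names below
(#auth, #search, Net::LDAP::Filter.eq) are assumptions about the gem's
client API rather than an excerpt from its docs.

  require 'net/ldap'

  ldap = Net::LDAP.new(:host => "ldap.example.com", :port => 389)
  ldap.auth "cn=admin,dc=example,dc=com", "secret"

  filter = Net::LDAP::Filter.eq("cn", "fred*")
  ldap.search(:base => "dc=example,dc=com", :filter => filter) do |entry|
    puts "DN: #{entry.dn}"
  end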
|
|
||||||
|
|
||||||
== Requirements
|
|
||||||
|
|
||||||
Net::LDAP requires a Ruby 1.8.7 interpreter or better.
|
|
||||||
|
|
||||||
== Install
|
|
||||||
|
|
||||||
Net::LDAP is a pure Ruby library. It does not require any external libraries.
|
|
||||||
You can install the RubyGems version of Net::LDAP available from the usual
|
|
||||||
sources.
|
|
||||||
|
|
||||||
gem install net-ldap
|
|
||||||
|
|
||||||
Simply require either 'net-ldap' or 'net/ldap'.
|
|
||||||
|
|
||||||
For non-RubyGems installations of Net::LDAP, you can use Minero Aoki's
|
|
||||||
{setup.rb}[http://i.loveruby.net/en/projects/setup/] as the layout of
|
|
||||||
Net::LDAP is compliant. The setup installer is not included in the
|
|
||||||
Net::LDAP repository.
|
|
||||||
|
|
||||||
:include: Contributors.rdoc
|
|
||||||
|
|
||||||
:include: License.rdoc
|
|
|
@ -1,75 +0,0 @@
|
||||||
# -*- ruby encoding: utf-8 -*-
|
|
||||||
|
|
||||||
require "rubygems"
|
|
||||||
require 'hoe'
|
|
||||||
|
|
||||||
Hoe.plugin :doofus
|
|
||||||
Hoe.plugin :git
|
|
||||||
Hoe.plugin :gemspec
|
|
||||||
Hoe.plugin :rubyforge
|
|
||||||
|
|
||||||
Hoe.spec 'net-ldap' do |spec|
|
|
||||||
spec.rubyforge_name = spec.name
|
|
||||||
|
|
||||||
spec.developer("Francis Cianfrocca", "blackhedd@rubyforge.org")
|
|
||||||
spec.developer("Emiel van de Laar", "gemiel@gmail.com")
|
|
||||||
spec.developer("Rory O'Connell", "rory.ocon@gmail.com")
|
|
||||||
spec.developer("Kaspar Schiess", "kaspar.schiess@absurd.li")
|
|
||||||
spec.developer("Austin Ziegler", "austin@rubyforge.org")
|
|
||||||
|
|
||||||
spec.remote_rdoc_dir = ''
|
|
||||||
spec.rsync_args << ' --exclude=statsvn/'
|
|
||||||
|
|
||||||
spec.url = %W(http://net-ldap.rubyforge.org/ https://github.com/ruby-ldap/ruby-net-ldap)
|
|
||||||
|
|
||||||
spec.history_file = 'History.rdoc'
|
|
||||||
spec.readme_file = 'README.rdoc'
|
|
||||||
|
|
||||||
spec.extra_rdoc_files = FileList["*.rdoc"].to_a
|
|
||||||
|
|
||||||
spec.extra_dev_deps << [ "hoe-git", "~> 1" ]
|
|
||||||
spec.extra_dev_deps << [ "hoe-gemspec", "~> 1" ]
|
|
||||||
spec.extra_dev_deps << [ "metaid", "~> 1" ]
|
|
||||||
spec.extra_dev_deps << [ "flexmock", "~> 0.9.0" ]
|
|
||||||
spec.extra_dev_deps << [ "rspec", "~> 2.0" ]
|
|
||||||
|
|
||||||
spec.clean_globs << "coverage"
|
|
||||||
|
|
||||||
spec.spec_extras[:required_ruby_version] = ">= 1.8.7"
|
|
||||||
spec.multiruby_skip << "1.8.6"
|
|
||||||
spec.multiruby_skip << "1_8_6"
|
|
||||||
|
|
||||||
spec.need_tar = true
|
|
||||||
end
|
|
||||||
|
|
||||||
# I'm not quite ready to get rid of this, but I think "rake git:manifest" is
|
|
||||||
# sufficient.
|
|
||||||
namespace :old do
|
|
||||||
desc "Build the manifest file from the current set of files."
|
|
||||||
task :build_manifest do |t|
|
|
||||||
require 'find'
|
|
||||||
|
|
||||||
paths = []
|
|
||||||
Find.find(".") do |path|
|
|
||||||
next if File.directory?(path)
|
|
||||||
next if path =~ /\.svn/
|
|
||||||
next if path =~ /\.git/
|
|
||||||
next if path =~ /\.hoerc/
|
|
||||||
next if path =~ /\.swp$/
|
|
||||||
next if path =~ %r{coverage/}
|
|
||||||
next if path =~ /~$/
|
|
||||||
paths << path.sub(%r{^\./}, '')
|
|
||||||
end
|
|
||||||
|
|
||||||
File.open("Manifest.txt", "w") do |f|
|
|
||||||
f.puts paths.sort.join("\n")
|
|
||||||
end
|
|
||||||
|
|
||||||
puts paths.sort.join("\n")
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
desc "Run a full set of integration and unit tests"
|
|
||||||
task :cruise => [:test, :spec]
|
|
||||||
|
|
||||||
# vim: syntax=ruby
|
|
|
@ -1 +0,0 @@
|
||||||
Autotest.add_discovery { "rspec2" }
|
|
|
@ -1,2 +0,0 @@
|
||||||
# -*- ruby encoding: utf-8 -*-
|
|
||||||
require 'net/ldap'
|
|
|
@ -1,316 +0,0 @@
|
||||||
# -*- ruby encoding: utf-8 -*-
|
|
||||||
module Net # :nodoc:
|
|
||||||
##
|
|
||||||
# == Basic Encoding Rules (BER) Support Module
|
|
||||||
#
|
|
||||||
# Much of the text below is cribbed from Wikipedia:
|
|
||||||
# http://en.wikipedia.org/wiki/Basic_Encoding_Rules
|
|
||||||
#
|
|
||||||
# The ITU Specification is also worthwhile reading:
|
|
||||||
# http://www.itu.int/ITU-T/studygroups/com17/languages/X.690-0207.pdf
|
|
||||||
#
|
|
||||||
# The Basic Encoding Rules were the original rules laid out by the ASN.1
|
|
||||||
# standard for encoding abstract information into a concrete data stream.
|
|
||||||
# The rules, collectively referred to as a transfer syntax in ASN.1
|
|
||||||
# parlance, specify the exact octet sequences which are used to encode a
|
|
||||||
# given data item. The syntax defines such elements as: the
|
|
||||||
# representations for basic data types, the structure of length
|
|
||||||
# information, and the means for defining complex or compound types based
|
|
||||||
# on more primitive types. The BER syntax, along with two subsets of BER
|
|
||||||
# (the Canonical Encoding Rules and the Distinguished Encoding Rules), are
|
|
||||||
# defined by the ITU-T's X.690 standards document, which is part of the
|
|
||||||
# ASN.1 document series.
|
|
||||||
#
|
|
||||||
# == Encoding
|
|
||||||
# The BER format specifies a self-describing and self-delimiting format
|
|
||||||
# for encoding ASN.1 data structures. Each data element is encoded as a
|
|
||||||
# type identifier, a length description, the actual data elements, and
|
|
||||||
# where necessary, an end-of-content marker. This format allows a receiver
|
|
||||||
# to decode the ASN.1 information from an incomplete stream, without
|
|
||||||
# requiring any pre-knowledge of the size, content, or semantic meaning of
|
|
||||||
# the data.
|
|
||||||
#
|
|
||||||
# <Type | Length | Value [| End-of-Content]>
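#
# As a concrete illustration (added here; not part of the original
# comment), the INTEGER value 5 encodes to three octets using the Fixnum
# extension from net/ber/core_ext/fixnum.rb: the universal primitive
# INTEGER tag (0x02), a one-byte length (0x01), and the content (0x05).
#
#   5.to_ber  # => "\x02\x01\x05"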
|
|
||||||
#
|
|
||||||
# == Protocol Data Units (PDU)
|
|
||||||
# Protocols are defined with schema represented in BER, such that a PDU
|
|
||||||
# consists of cascaded type-length-value encodings.
|
|
||||||
#
|
|
||||||
# === Type Tags
|
|
||||||
# BER type tags are represented as single octets (bytes). The lower five
|
|
||||||
# bits of the octet are tag identifier numbers and the upper three bits of
|
|
||||||
# the octet are used to distinguish the type as native to ASN.1,
|
|
||||||
# application-specific, context-specific, or private. See
|
|
||||||
# Net::BER::TAG_CLASS and Net::BER::ENCODING_TYPE for more information.
|
|
||||||
#
|
|
||||||
# If Class is set to Universal (0b00______), the value is of a type native
|
|
||||||
# to ASN.1 (e.g. INTEGER). The Application class (0b01______) is only
|
|
||||||
# valid for one specific application. Context_specific (0b10______)
|
|
||||||
# depends on the context and private (0b11______) can be defined in
|
|
||||||
# private specifications.
|
|
||||||
#
|
|
||||||
# If the primitive/constructed bit is zero (0b__0_____), it specifies that
|
|
||||||
# the value is primitive like an INTEGER. If it is one (0b__1_____), the
|
|
||||||
# value is a constructed value that contains type-length-value encoded
|
|
||||||
# types like a SET or a SEQUENCE.
|
|
||||||
#
|
|
||||||
# === Defined Universal (ASN.1 Native) Types
|
|
||||||
# There are a number of pre-defined universal (native) types.
|
|
||||||
#
|
|
||||||
# <table>
|
|
||||||
# <tr><th>Name</th><th>Primitive<br />Constructed</th><th>Number</th></tr>
|
|
||||||
# <tr><th>EOC (End-of-Content)</th><th>P</th><td>0: 0 (0x0, 0b00000000)</td></tr>
|
|
||||||
# <tr><th>BOOLEAN</th><th>P</th><td>1: 1 (0x01, 0b00000001)</td></tr>
|
|
||||||
# <tr><th>INTEGER</th><th>P</th><td>2: 2 (0x02, 0b00000010)</td></tr>
|
|
||||||
# <tr><th>BIT STRING</th><th>P</th><td>3: 3 (0x03, 0b00000011)</td></tr>
|
|
||||||
# <tr><th>BIT STRING</th><th>C</th><td>3: 35 (0x23, 0b00100011)</td></tr>
|
|
||||||
# <tr><th>OCTET STRING</th><th>P</th><td>4: 4 (0x04, 0b00000100)</td></tr>
|
|
||||||
# <tr><th>OCTET STRING</th><th>C</th><td>4: 36 (0x24, 0b00100100)</td></tr>
|
|
||||||
# <tr><th>NULL</th><th>P</th><td>5: 5 (0x05, 0b00000101)</td></tr>
|
|
||||||
# <tr><th>OBJECT IDENTIFIER</th><th>P</th><td>6: 6 (0x06, 0b00000110)</td></tr>
|
|
||||||
# <tr><th>Object Descriptor</th><th>P</th><td>7: 7 (0x07, 0b00000111)</td></tr>
|
|
||||||
# <tr><th>EXTERNAL</th><th>C</th><td>8: 40 (0x28, 0b00101000)</td></tr>
|
|
||||||
# <tr><th>REAL (float)</th><th>P</th><td>9: 9 (0x09, 0b00001001)</td></tr>
|
|
||||||
# <tr><th>ENUMERATED</th><th>P</th><td>10: 10 (0x0a, 0b00001010)</td></tr>
|
|
||||||
# <tr><th>EMBEDDED PDV</th><th>C</th><td>11: 43 (0x2b, 0b00101011)</td></tr>
|
|
||||||
# <tr><th>UTF8String</th><th>P</th><td>12: 12 (0x0c, 0b00001100)</td></tr>
|
|
||||||
# <tr><th>UTF8String</th><th>C</th><td>12: 44 (0x2c, 0b00101100)</td></tr>
|
|
||||||
# <tr><th>RELATIVE-OID</th><th>P</th><td>13: 13 (0x0d, 0b00001101)</td></tr>
|
|
||||||
# <tr><th>SEQUENCE and SEQUENCE OF</th><th>C</th><td>16: 48 (0x30, 0b00110000)</td></tr>
|
|
||||||
# <tr><th>SET and SET OF</th><th>C</th><td>17: 49 (0x31, 0b00110001)</td></tr>
|
|
||||||
# <tr><th>NumericString</th><th>P</th><td>18: 18 (0x12, 0b00010010)</td></tr>
|
|
||||||
# <tr><th>NumericString</th><th>C</th><td>18: 50 (0x32, 0b00110010)</td></tr>
|
|
||||||
# <tr><th>PrintableString</th><th>P</th><td>19: 19 (0x13, 0b00010011)</td></tr>
|
|
||||||
# <tr><th>PrintableString</th><th>C</th><td>19: 51 (0x33, 0b00110011)</td></tr>
|
|
||||||
# <tr><th>T61String</th><th>P</th><td>20: 20 (0x14, 0b00010100)</td></tr>
|
|
||||||
# <tr><th>T61String</th><th>C</th><td>20: 52 (0x34, 0b00110100)</td></tr>
|
|
||||||
# <tr><th>VideotexString</th><th>P</th><td>21: 21 (0x15, 0b00010101)</td></tr>
|
|
||||||
# <tr><th>VideotexString</th><th>C</th><td>21: 53 (0x35, 0b00110101)</td></tr>
|
|
||||||
# <tr><th>IA5String</th><th>P</th><td>22: 22 (0x16, 0b00010110)</td></tr>
|
|
||||||
# <tr><th>IA5String</th><th>C</th><td>22: 54 (0x36, 0b00110110)</td></tr>
|
|
||||||
# <tr><th>UTCTime</th><th>P</th><td>23: 23 (0x17, 0b00010111)</td></tr>
|
|
||||||
# <tr><th>UTCTime</th><th>C</th><td>23: 55 (0x37, 0b00110111)</td></tr>
|
|
||||||
# <tr><th>GeneralizedTime</th><th>P</th><td>24: 24 (0x18, 0b00011000)</td></tr>
|
|
||||||
# <tr><th>GeneralizedTime</th><th>C</th><td>24: 56 (0x38, 0b00111000)</td></tr>
|
|
||||||
# <tr><th>GraphicString</th><th>P</th><td>25: 25 (0x19, 0b00011001)</td></tr>
|
|
||||||
# <tr><th>GraphicString</th><th>C</th><td>25: 57 (0x39, 0b00111001)</td></tr>
|
|
||||||
# <tr><th>VisibleString</th><th>P</th><td>26: 26 (0x1a, 0b00011010)</td></tr>
|
|
||||||
# <tr><th>VisibleString</th><th>C</th><td>26: 58 (0x3a, 0b00111010)</td></tr>
|
|
||||||
# <tr><th>GeneralString</th><th>P</th><td>27: 27 (0x1b, 0b00011011)</td></tr>
|
|
||||||
# <tr><th>GeneralString</th><th>C</th><td>27: 59 (0x3b, 0b00111011)</td></tr>
|
|
||||||
# <tr><th>UniversalString</th><th>P</th><td>28: 28 (0x1c, 0b00011100)</td></tr>
|
|
||||||
# <tr><th>UniversalString</th><th>C</th><td>28: 60 (0x3c, 0b00111100)</td></tr>
|
|
||||||
# <tr><th>CHARACTER STRING</th><th>P</th><td>29: 29 (0x1d, 0b00011101)</td></tr>
|
|
||||||
# <tr><th>CHARACTER STRING</th><th>C</th><td>29: 61 (0x3d, 0b00111101)</td></tr>
|
|
||||||
# <tr><th>BMPString</th><th>P</th><td>30: 30 (0x1e, 0b00011110)</td></tr>
|
|
||||||
# <tr><th>BMPString</th><th>C</th><td>30: 62 (0x3e, 0b00111110)</td></tr>
|
|
||||||
# </table>
|
|
||||||
module BER
|
|
||||||
VERSION = '0.2.2'
|
|
||||||
|
|
||||||
##
|
|
||||||
# Used for BER-encoding the length and content bytes of Fixnum integer
|
|
||||||
# values.
|
|
||||||
MAX_FIXNUM_SIZE = 0.size
|
|
||||||
|
|
||||||
##
|
|
||||||
# BER tag classes are kept in bits seven and eight of the tag type
|
|
||||||
# octet.
|
|
||||||
#
|
|
||||||
# <table>
|
|
||||||
# <tr><th>Bitmask</th><th>Definition</th></tr>
|
|
||||||
# <tr><th><tt>0b00______</tt></th><td>Universal (ASN.1 Native) Types</td></tr>
|
|
||||||
# <tr><th><tt>0b01______</tt></th><td>Application Types</td></tr>
|
|
||||||
# <tr><th><tt>0b10______</tt></th><td>Context-Specific Types</td></tr>
|
|
||||||
# <tr><th><tt>0b11______</tt></th><td>Private Types</td></tr>
|
|
||||||
# </table>
|
|
||||||
TAG_CLASS = {
|
|
||||||
:universal => 0b00000000, # 0
|
|
||||||
:application => 0b01000000, # 64
|
|
||||||
:context_specific => 0b10000000, # 128
|
|
||||||
:private => 0b11000000, # 192
|
|
||||||
}
|
|
||||||
|
|
||||||
##
|
|
||||||
# BER encoding type is kept in bit 6 of the tag type octet.
|
|
||||||
#
|
|
||||||
# <table>
|
|
||||||
# <tr><th>Bitmask</th><th>Definition</th></tr>
|
|
||||||
# <tr><th><tt>0b__0_____</tt></th><td>Primitive</td></tr>
|
|
||||||
# <tr><th><tt>0b__1_____</tt></th><td>Constructed</td></tr>
|
|
||||||
# </table>
|
|
||||||
ENCODING_TYPE = {
|
|
||||||
:primitive => 0b00000000, # 0
|
|
||||||
:constructed => 0b00100000, # 32
|
|
||||||
}
|
|
||||||
|
|
||||||
##
|
|
||||||
# Accepts a hash of hashes describing a BER syntax and converts it into
|
|
||||||
# a byte-keyed object for fast BER conversion lookup. The resulting
|
|
||||||
# "compiled" syntax is used by Net::BER::BERParser.
|
|
||||||
#
|
|
||||||
# This method should be called only by client classes of Net::BER (e.g.,
|
|
||||||
# Net::LDAP and Net::SNMP) and not by clients of those classes.
|
|
||||||
#
|
|
||||||
# The hash-based syntax uses TAG_CLASS keys that contain hashes of
|
|
||||||
# ENCODING_TYPE keys that contain tag numbers with object type markers.
|
|
||||||
#
|
|
||||||
# :<TAG_CLASS> => {
|
|
||||||
# :<ENCODING_TYPE> => {
|
|
||||||
# <number> => <object-type>
|
|
||||||
# },
|
|
||||||
# },
|
|
||||||
#
|
|
||||||
# === Permitted Object Types
|
|
||||||
# <tt>:string</tt>:: A string value, represented as BerIdentifiedString.
|
|
||||||
# <tt>:integer</tt>:: An integer value, represented with Fixnum.
|
|
||||||
# <tt>:oid</tt>:: An Object Identifier value; see X.690 section
|
|
||||||
# 8.19. Currently represented with a standard array,
|
|
||||||
# but may be better represented as a
|
|
||||||
# BerIdentifiedOID object.
|
|
||||||
# <tt>:array</tt>:: A sequence, represented as BerIdentifiedArray.
|
|
||||||
# <tt>:boolean</tt>:: A boolean value, represented as +true+ or +false+.
|
|
||||||
# <tt>:null</tt>:: A null value, represented as BerIdentifiedNull.
|
|
||||||
#
|
|
||||||
# === Example
|
|
||||||
# Net::LDAP defines its ASN.1 BER syntax something like this:
|
|
||||||
#
|
|
||||||
# class Net::LDAP
|
|
||||||
# AsnSyntax = Net::BER.compile_syntax({
|
|
||||||
# :application => {
|
|
||||||
# :primitive => {
|
|
||||||
# 2 => :null,
|
|
||||||
# },
|
|
||||||
# :constructed => {
|
|
||||||
# 0 => :array,
|
|
||||||
# # ...
|
|
||||||
# },
|
|
||||||
# },
|
|
||||||
# :context_specific => {
|
|
||||||
# :primitive => {
|
|
||||||
# 0 => :string,
|
|
||||||
# # ...
|
|
||||||
# },
|
|
||||||
# :constructed => {
|
|
||||||
# 0 => :array,
|
|
||||||
# # ...
|
|
||||||
# },
|
|
||||||
# }
|
|
||||||
# })
|
|
||||||
# end
|
|
||||||
#
|
|
||||||
# NOTE:: For readability and formatting purposes, Net::LDAP and its
|
|
||||||
# siblings actually construct their syntaxes more deliberately,
|
|
||||||
# as shown below. Since a hash is passed in the end in any case,
|
|
||||||
# the format does not matter.
|
|
||||||
#
|
|
||||||
# primitive = { 2 => :null }
|
|
||||||
# constructed = {
|
|
||||||
# 0 => :array,
|
|
||||||
# # ...
|
|
||||||
# }
|
|
||||||
# application = {
|
|
||||||
# :primitive => primitive,
|
|
||||||
# :constructed => constructed
|
|
||||||
# }
|
|
||||||
#
|
|
||||||
# primitive = {
|
|
||||||
# 0 => :string,
|
|
||||||
# # ...
|
|
||||||
# }
|
|
||||||
# constructed = {
|
|
||||||
# 0 => :array,
|
|
||||||
# # ...
|
|
||||||
# }
|
|
||||||
# context_specific = {
|
|
||||||
# :primitive => primitive,
|
|
||||||
# :constructed => constructed
|
|
||||||
# }
|
|
||||||
# AsnSyntax = Net::BER.compile_syntax(:application => application,
|
|
||||||
# :context_specific => context_specific)
|
|
||||||
def self.compile_syntax(syntax)
|
|
||||||
# TODO 20100327 AZ: Should we be allocating an array of 256 values
|
|
||||||
# that will either be +nil+ or an object type symbol, or should we
|
|
||||||
# allocate an empty Hash since unknown values return +nil+ anyway?
|
|
||||||
out = [ nil ] * 256
|
|
||||||
syntax.each do |tag_class_id, encodings|
|
|
||||||
tag_class = TAG_CLASS[tag_class_id]
|
|
||||||
encodings.each do |encoding_id, classes|
|
|
||||||
encoding = ENCODING_TYPE[encoding_id]
|
|
||||||
object_class = tag_class + encoding
|
|
||||||
classes.each do |number, object_type|
|
|
||||||
out[object_class + number] = object_type
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
out
|
|
||||||
end
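
# An illustrative lookup against a compiled syntax (added; not part of the
# original source). The index is composed exactly as in the loop above:
# tag class bits plus encoding bit plus tag number. This is how
# Net::BER::BERParser::BuiltinSyntax is consulted during parsing.
#
#   syntax = Net::BER.compile_syntax(
#     :universal => { :primitive => { 2 => :integer } })
#   syntax[0b00000000 + 0b00000000 + 2]  # => :integer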
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
class Net::BER::BerError < RuntimeError; end
|
|
||||||
|
|
||||||
##
|
|
||||||
# An Array object with a BER identifier attached.
|
|
||||||
class Net::BER::BerIdentifiedArray < Array
|
|
||||||
attr_accessor :ber_identifier
|
|
||||||
|
|
||||||
def initialize(*args)
|
|
||||||
super
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# A BER object identifier.
|
|
||||||
class Net::BER::BerIdentifiedOid
|
|
||||||
attr_accessor :ber_identifier
|
|
||||||
|
|
||||||
def initialize(oid)
|
|
||||||
if oid.is_a?(String)
|
|
||||||
oid = oid.split(/\./).map {|s| s.to_i }
|
|
||||||
end
|
|
||||||
@value = oid
|
|
||||||
end
|
|
||||||
|
|
||||||
def to_ber
|
|
||||||
to_ber_oid
|
|
||||||
end
|
|
||||||
|
|
||||||
def to_ber_oid
|
|
||||||
@value.to_ber_oid
|
|
||||||
end
|
|
||||||
|
|
||||||
def to_s
|
|
||||||
@value.join(".")
|
|
||||||
end
|
|
||||||
|
|
||||||
def to_arr
|
|
||||||
@value.dup
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# A String object with a BER identifier attached.
|
|
||||||
class Net::BER::BerIdentifiedString < String
|
|
||||||
attr_accessor :ber_identifier
|
|
||||||
def initialize args
|
|
||||||
super args
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
module Net::BER
|
|
||||||
##
|
|
||||||
# A BER null object.
|
|
||||||
class BerIdentifiedNull
|
|
||||||
attr_accessor :ber_identifier
|
|
||||||
def to_ber
|
|
||||||
"\005\000"
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# The default BerIdentifiedNull object.
|
|
||||||
Null = Net::BER::BerIdentifiedNull.new
|
|
||||||
end
|
|
||||||
|
|
||||||
require 'net/ber/core_ext'
|
|
|
@ -1,168 +0,0 @@
|
||||||
# -*- ruby encoding: utf-8 -*-
|
|
||||||
require 'stringio'
|
|
||||||
|
|
||||||
# Implements Basic Encoding Rules parsing to be mixed into types as needed.
|
|
||||||
module Net::BER::BERParser
|
|
||||||
primitive = {
|
|
||||||
1 => :boolean,
|
|
||||||
2 => :integer,
|
|
||||||
4 => :string,
|
|
||||||
5 => :null,
|
|
||||||
6 => :oid,
|
|
||||||
10 => :integer,
|
|
||||||
13 => :string # (relative OID)
|
|
||||||
}
|
|
||||||
constructed = {
|
|
||||||
16 => :array,
|
|
||||||
17 => :array
|
|
||||||
}
|
|
||||||
universal = { :primitive => primitive, :constructed => constructed }
|
|
||||||
|
|
||||||
primitive = { 10 => :integer }
|
|
||||||
context = { :primitive => primitive }
|
|
||||||
|
|
||||||
# The universal, built-in ASN.1 BER syntax.
|
|
||||||
BuiltinSyntax = Net::BER.compile_syntax(:universal => universal,
|
|
||||||
:context_specific => context)
|
|
||||||
|
|
||||||
##
|
|
||||||
# This is an extract of our BER object parsing to simplify our
|
|
||||||
# understanding of how we parse basic BER object types.
|
|
||||||
def parse_ber_object(syntax, id, data)
|
|
||||||
# Find the object type from either the provided syntax lookup table or
|
|
||||||
# the built-in syntax lookup table.
|
|
||||||
#
|
|
||||||
# This exceptionally clever bit of code is verrrry slow.
|
|
||||||
object_type = (syntax && syntax[id]) || BuiltinSyntax[id]
|
|
||||||
|
|
||||||
# == is expensive so sort this so the common cases are at the top.
|
|
||||||
if object_type == :string
|
|
||||||
s = Net::BER::BerIdentifiedString.new(data || "")
|
|
||||||
s.ber_identifier = id
|
|
||||||
s
|
|
||||||
elsif object_type == :integer
|
|
||||||
j = 0
|
|
||||||
data.each_byte { |b| j = (j << 8) + b }
|
|
||||||
j
|
|
||||||
elsif object_type == :oid
|
|
||||||
# See X.690 pgh 8.19 for an explanation of this algorithm.
|
|
||||||
# This is potentially not good enough. We may need a
|
|
||||||
# BerIdentifiedOid as a subclass of BerIdentifiedArray, to
|
|
||||||
# get the ber identifier and also a to_s method that produces
|
|
||||||
# the familiar dotted notation.
|
|
||||||
oid = data.unpack("w*")
|
|
||||||
f = oid.shift
|
|
||||||
g = if f < 40
|
|
||||||
[0, f]
|
|
||||||
elsif f < 80
|
|
||||||
[1, f - 40]
|
|
||||||
else
|
|
||||||
# f - 80 can easily be > 80. What a weird optimization.
|
|
||||||
[2, f - 80]
|
|
||||||
end
|
|
||||||
oid.unshift g.last
|
|
||||||
oid.unshift g.first
|
|
||||||
# Net::BER::BerIdentifiedOid.new(oid)
|
|
||||||
oid
|
|
||||||
elsif object_type == :array
|
|
||||||
seq = Net::BER::BerIdentifiedArray.new
|
|
||||||
seq.ber_identifier = id
|
|
||||||
sio = StringIO.new(data || "")
|
|
||||||
# Interpret the subobject, but note how the loop is built:
|
|
||||||
# nil ends the loop, but false (a valid BER value) does not!
|
|
||||||
while (e = sio.read_ber(syntax)) != nil
|
|
||||||
seq << e
|
|
||||||
end
|
|
||||||
seq
|
|
||||||
elsif object_type == :boolean
|
|
||||||
data != "\000"
|
|
||||||
elsif object_type == :null
|
|
||||||
n = Net::BER::BerIdentifiedNull.new
|
|
||||||
n.ber_identifier = id
|
|
||||||
n
|
|
||||||
else
|
|
||||||
raise Net::BER::BerError, "Unsupported object type: id=#{id}"
|
|
||||||
end
|
|
||||||
end
|
|
||||||
private :parse_ber_object
|
|
||||||
|
|
||||||
##
|
|
||||||
# This is an extract of how our BER object length parsing is done to
|
|
||||||
# simplify the primary call. This is defined in X.690 section 8.1.3.
|
|
||||||
#
|
|
||||||
# The BER length will either be a single byte or up to 126 bytes in
|
|
||||||
# length. There is a special case of a BER length indicating that the
|
|
||||||
# content-length is undefined and will be identified by the presence of
|
|
||||||
# two null values (0x00 0x00).
|
|
||||||
#
|
|
||||||
# <table>
|
|
||||||
# <tr>
|
|
||||||
# <th>Range</th>
|
|
||||||
# <th>Length</th>
|
|
||||||
# </tr>
|
|
||||||
# <tr>
|
|
||||||
# <th>0x00 -- 0x7f<br />0b00000000 -- 0b01111111</th>
|
|
||||||
# <td>0 - 127 bytes</td>
|
|
||||||
# </tr>
|
|
||||||
# <tr>
|
|
||||||
# <th>0x80<br />0b10000000</th>
|
|
||||||
# <td>Indeterminate (end-of-content marker required)</td>
|
|
||||||
# </tr>
|
|
||||||
# <tr>
|
|
||||||
# <th>0x81 -- 0xfe<br />0b10000001 -- 0b11111110</th>
|
|
||||||
# <td>1 - 126 bytes of length as an integer value</td>
|
|
||||||
# </tr>
|
|
||||||
# <tr>
|
|
||||||
# <th>0xff<br />0b11111111</th>
|
|
||||||
# <td>Illegal (reserved for future expansion)</td>
|
|
||||||
# </tr>
|
|
||||||
# </table>
|
|
||||||
#
|
|
||||||
#--
|
|
||||||
# This has been modified from the version that was previously inside
|
|
||||||
# #read_ber to handle both the indeterminate terminator case and the
|
|
||||||
# invalid BER length case. Because the "lengthlength" value was not used
|
|
||||||
# inside of #read_ber, we no longer return it.
|
|
||||||
def read_ber_length
|
|
||||||
n = getbyte
|
|
||||||
|
|
||||||
if n <= 0x7f
|
|
||||||
n
|
|
||||||
elsif n == 0x80
|
|
||||||
-1
|
|
||||||
elsif n == 0xff
|
|
||||||
raise Net::BER::BerError, "Invalid BER length 0xFF detected."
|
|
||||||
else
|
|
||||||
v = 0
|
|
||||||
read(n & 0x7f).each_byte do |b|
|
|
||||||
v = (v << 8) + b
|
|
||||||
end
|
|
||||||
|
|
||||||
v
|
|
||||||
end
|
|
||||||
end
|
|
||||||
private :read_ber_length
|
|
||||||
|
|
||||||
##
|
|
||||||
# Reads a BER object from the including object. Requires that #getbyte is
|
|
||||||
# implemented on the including object and that it returns a Fixnum value.
|
|
||||||
# Also requires #read(bytes) to work.
|
|
||||||
#
|
|
||||||
# This does not work with non-blocking I/O.
|
|
||||||
def read_ber(syntax = nil)
|
|
||||||
# TODO: clean this up so it works properly with partial packets coming
|
|
||||||
# from streams that don't block when we ask for more data (like
|
|
||||||
# StringIOs). As it is, this can throw TypeErrors and other nasties.
|
|
||||||
|
|
||||||
id = getbyte or return nil # don't trash this value, we'll use it later
|
|
||||||
content_length = read_ber_length
|
|
||||||
|
|
||||||
if -1 == content_length
|
|
||||||
raise Net::BER::BerError, "Indeterminite BER content length not implemented."
|
|
||||||
else
|
|
||||||
data = read(content_length)
|
|
||||||
end
|
|
||||||
|
|
||||||
parse_ber_object(syntax, id, data)
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,62 +0,0 @@
|
||||||
# -*- ruby encoding: utf-8 -*-
|
|
||||||
require 'net/ber/ber_parser'
|
|
||||||
# :stopdoc:
|
|
||||||
class IO
|
|
||||||
include Net::BER::BERParser
|
|
||||||
end
|
|
||||||
|
|
||||||
class StringIO
|
|
||||||
include Net::BER::BERParser
|
|
||||||
end
|
|
||||||
|
|
||||||
if defined? ::OpenSSL
|
|
||||||
class OpenSSL::SSL::SSLSocket
|
|
||||||
include Net::BER::BERParser
|
|
||||||
end
|
|
||||||
end
|
|
||||||
# :startdoc:
|
|
||||||
|
|
||||||
module Net::BER::Extensions # :nodoc:
|
|
||||||
end
|
|
||||||
|
|
||||||
require 'net/ber/core_ext/string'
|
|
||||||
# :stopdoc:
|
|
||||||
class String
|
|
||||||
include Net::BER::BERParser
|
|
||||||
include Net::BER::Extensions::String
|
|
||||||
end
|
|
||||||
|
|
||||||
require 'net/ber/core_ext/array'
|
|
||||||
# :stopdoc:
|
|
||||||
class Array
|
|
||||||
include Net::BER::Extensions::Array
|
|
||||||
end
|
|
||||||
# :startdoc:
|
|
||||||
|
|
||||||
require 'net/ber/core_ext/bignum'
|
|
||||||
# :stopdoc:
|
|
||||||
class Bignum
|
|
||||||
include Net::BER::Extensions::Bignum
|
|
||||||
end
|
|
||||||
# :startdoc:
|
|
||||||
|
|
||||||
require 'net/ber/core_ext/fixnum'
|
|
||||||
# :stopdoc:
|
|
||||||
class Fixnum
|
|
||||||
include Net::BER::Extensions::Fixnum
|
|
||||||
end
|
|
||||||
# :startdoc:
|
|
||||||
|
|
||||||
require 'net/ber/core_ext/true_class'
|
|
||||||
# :stopdoc:
|
|
||||||
class TrueClass
|
|
||||||
include Net::BER::Extensions::TrueClass
|
|
||||||
end
|
|
||||||
# :startdoc:
|
|
||||||
|
|
||||||
require 'net/ber/core_ext/false_class'
|
|
||||||
# :stopdoc:
|
|
||||||
class FalseClass
|
|
||||||
include Net::BER::Extensions::FalseClass
|
|
||||||
end
|
|
||||||
# :startdoc:
|
|
|
@ -1,82 +0,0 @@
|
||||||
# -*- ruby encoding: utf-8 -*-
|
|
||||||
##
|
|
||||||
# BER extensions to the Array class.
|
|
||||||
module Net::BER::Extensions::Array
|
|
||||||
##
|
|
||||||
# Converts an Array to a BER sequence. All values in the Array are
|
|
||||||
# expected to be in BER format prior to calling this method.
|
|
||||||
def to_ber(id = 0)
|
|
||||||
# The universal sequence tag 0x30 is composed of the base tag value
|
|
||||||
# (0x10) and the constructed flag (0x20).
|
|
||||||
to_ber_seq_internal(0x30 + id)
|
|
||||||
end
|
|
||||||
alias_method :to_ber_sequence, :to_ber
|
|
||||||
|
|
||||||
##
|
|
||||||
# Converts an Array to a BER set. All values in the Array are expected to
|
|
||||||
# be in BER format prior to calling this method.
|
|
||||||
def to_ber_set(id = 0)
|
|
||||||
# The universal set tag 0x31 is composed of the base tag value (0x11)
|
|
||||||
# and the constructed flag (0x20).
|
|
||||||
to_ber_seq_internal(0x31 + id)
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Converts an Array to an application-specific sequence, assigned a tag
|
|
||||||
# value that is meaningful to the particular protocol being used. All
|
|
||||||
# values in the Array are expected to be in BER format prior to calling
|
|
||||||
# this method.
|
|
||||||
#--
|
|
||||||
# Implementor's note 20100320(AZ): RFC 4511 (the LDAPv3 protocol) as well
|
|
||||||
# as earlier RFCs 1777 and 2559 seem to indicate that LDAP only has
|
|
||||||
# application constructed sequences (0x60). However, ldapsearch sends some
|
|
||||||
# context-specific constructed sequences (0xA0); other clients may do the
|
|
||||||
# same. This behaviour appears to violate the RFCs. In real-world
|
|
||||||
# practice, we may need to change calls of #to_ber_appsequence to
|
|
||||||
# #to_ber_contextspecific for full LDAP server compatibility.
|
|
||||||
#
|
|
||||||
# This note probably belongs elsewhere.
|
|
||||||
#++
|
|
||||||
def to_ber_appsequence(id = 0)
|
|
||||||
# The application sequence tag always starts from the application flag
|
|
||||||
# (0x40) and the constructed flag (0x20).
|
|
||||||
to_ber_seq_internal(0x60 + id)
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Converts an Array to a context-specific sequence, assigned a tag value
|
|
||||||
# that is meaningful to the particular context of the particular protocol
|
|
||||||
# being used. All values in the Array are expected to be in BER format
|
|
||||||
# prior to calling this method.
|
|
||||||
def to_ber_contextspecific(id = 0)
|
|
||||||
# The context-specific sequence tag always starts from the context flag
|
|
||||||
# (0x80) and the constructed flag (0x20).
|
|
||||||
to_ber_seq_internal(0xa0 + id)
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# The internal sequence packing routine. All values in the Array are
|
|
||||||
# expected to be in BER format prior to calling this method.
|
|
||||||
def to_ber_seq_internal(code)
|
|
||||||
s = self.join
|
|
||||||
[code].pack('C') + s.length.to_ber_length_encoding + s
|
|
||||||
end
|
|
||||||
private :to_ber_seq_internal
|
|
||||||
|
|
||||||
##
|
|
||||||
# SNMP Object Identifiers (OID) are special arrays
|
|
||||||
#--
|
|
||||||
# 20100320 AZ: I do not think that this method should be in BER, since
|
|
||||||
# this appears to be SNMP-specific. This should probably be subsumed by a
|
|
||||||
# proper SNMP OID object.
|
|
||||||
#++
|
|
||||||
def to_ber_oid
|
|
||||||
ary = self.dup
|
|
||||||
first = ary.shift
|
|
||||||
raise Net::BER::BerError, "Invalid OID" unless [0, 1, 2].include?(first)
|
|
||||||
first = first * 40 + ary.shift
|
|
||||||
ary.unshift first
|
|
||||||
oid = ary.pack("w*")
|
|
||||||
[6, oid.length].pack("CC") + oid
|
|
||||||
end
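
# A worked example of the packing above (added for illustration; not part
# of the original source): the first two arcs fold into one byte
# (40 * 1 + 3 = 43 = 0x2B), the remaining arcs are BER-compressed by
# Array#pack("w*"), and the result is prefixed with tag 0x06 and a
# one-byte length.
#
#   [1, 3, 6, 1].to_ber_oid  # => "\x06\x03\x2B\x06\x01"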
|
|
||||||
end
|
|
|
@ -1,22 +0,0 @@
|
||||||
# -*- ruby encoding: utf-8 -*-
|
|
||||||
##
|
|
||||||
# BER extensions to the Bignum class.
|
|
||||||
module Net::BER::Extensions::Bignum
|
|
||||||
##
|
|
||||||
# Converts a Bignum to an uncompressed BER integer.
|
|
||||||
def to_ber
|
|
||||||
result = []
|
|
||||||
|
|
||||||
# NOTE: Array#pack's 'w' is a BER _compressed_ integer. We need
|
|
||||||
# uncompressed BER integers, so we're not using that. See also:
|
|
||||||
# http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/228864
|
|
||||||
n = self
|
|
||||||
while n > 0
|
|
||||||
b = n & 0xff
|
|
||||||
result << b
|
|
||||||
n = n >> 8
|
|
||||||
end
|
|
||||||
|
|
||||||
"\002" + ([result.size] + result.reverse).pack('C*')
|
|
||||||
end
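
# For illustration (added; not part of the original source): 2**64 is a
# Bignum on common Ruby builds, so it takes this path. Nine content bytes
# are collected least-significant first, reversed, and prefixed with the
# INTEGER tag and a one-byte count.
#
#   (2**64).to_ber  # => "\x02\x09\x01" followed by eight "\x00" bytes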
|
|
||||||
end
|
|
|
@ -1,10 +0,0 @@
|
||||||
# -*- ruby encoding: utf-8 -*-
|
|
||||||
##
|
|
||||||
# BER extensions to +false+.
|
|
||||||
module Net::BER::Extensions::FalseClass
|
|
||||||
##
|
|
||||||
# Converts +false+ to the BER wireline representation of +false+.
|
|
||||||
def to_ber
|
|
||||||
"\001\001\000"
|
|
||||||
end
|
|
||||||
end
|
|
|
@ -1,66 +0,0 @@
|
||||||
# -*- ruby encoding: utf-8 -*-
|
|
||||||
##
|
|
||||||
# BER extensions to the Fixnum class.
|
|
||||||
module Net::BER::Extensions::Fixnum
|
|
||||||
##
|
|
||||||
# Converts the fixnum to BER format.
|
|
||||||
def to_ber
|
|
||||||
"\002#{to_ber_internal}"
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Converts the fixnum to BER enumerated format.
|
|
||||||
def to_ber_enumerated
|
|
||||||
"\012#{to_ber_internal}"
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Converts the fixnum to BER length encoding format.
|
|
||||||
def to_ber_length_encoding
|
|
||||||
if self <= 127
|
|
||||||
[self].pack('C')
|
|
||||||
else
|
|
||||||
i = [self].pack('N').sub(/^[\0]+/,"")
|
|
||||||
[0x80 + i.length].pack('C') + i
|
|
||||||
end
|
|
||||||
end
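
# For illustration (added; not part of the original source): lengths of
# 127 or less occupy a single byte, while longer lengths are written as
# 0x80 plus the number of length bytes, followed by those bytes.
#
#   5.to_ber_length_encoding    # => "\x05"
#   300.to_ber_length_encoding  # => "\x82\x01\x2C"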
|
|
||||||
|
|
||||||
##
|
|
||||||
# Generate a BER-encoding for an application-defined INTEGER. Examples of
|
|
||||||
# such integers are SNMP's Counter, Gauge, and TimeTick types.
|
|
||||||
def to_ber_application(tag)
|
|
||||||
[0x40 + tag].pack("C") + to_ber_internal
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Used to BER-encode the length and content bytes of a Fixnum. Callers
|
|
||||||
# must prepend the tag byte for the contained value.
|
|
||||||
def to_ber_internal
|
|
||||||
# CAUTION: Bit twiddling ahead. You might want to shield your eyes or
|
|
||||||
# something.
|
|
||||||
|
|
||||||
# Looks for the first byte in the fixnum that is not all zeroes. It does
|
|
||||||
# this by masking one byte after another, checking the result for bits
|
|
||||||
# that are left on.
|
|
||||||
size = Net::BER::MAX_FIXNUM_SIZE
|
|
||||||
while size > 1
|
|
||||||
break if (self & (0xff << (size - 1) * 8)) > 0
|
|
||||||
size -= 1
|
|
||||||
end
|
|
||||||
|
|
||||||
# Store the size of the fixnum in the result
|
|
||||||
result = [size]
|
|
||||||
|
|
||||||
# Appends bytes to result, starting with higher orders first. Extraction
|
|
||||||
# of bytes is done by right shifting the original fixnum by an amount
|
|
||||||
# and then masking that with 0xff.
|
|
||||||
while size > 0
|
|
||||||
# right shift size - 1 bytes, mask with 0xff
|
|
||||||
result << ((self >> ((size - 1) * 8)) & 0xff)
|
|
||||||
size -= 1
|
|
||||||
end
|
|
||||||
|
|
||||||
result.pack('C*')
|
|
||||||
end
|
|
||||||
private :to_ber_internal
|
|
||||||
end
|
|
|
@ -1,48 +0,0 @@
# -*- ruby encoding: utf-8 -*-
require 'stringio'

##
# BER extensions to the String class.
module Net::BER::Extensions::String
  ##
  # Converts a string to a BER string. Universal octet-strings are tagged
  # with 0x04, but other values are possible depending on the context, so we
  # let the caller give us one.
  #
  # User code should call either #to_ber_application_string or
  # #to_ber_contextspecific.
  def to_ber(code = 0x04)
    [code].pack('C') + length.to_ber_length_encoding + self
  end

  ##
  # Creates an application-specific BER string encoded value with the
  # provided syntax code value.
  def to_ber_application_string(code)
    to_ber(0x40 + code)
  end

  ##
  # Creates a context-specific BER string encoded value with the provided
  # syntax code value.
  def to_ber_contextspecific(code)
    to_ber(0x80 + code)
  end

  ##
  # Nondestructively reads a BER object from this string.
  def read_ber(syntax = nil)
    StringIO.new(self).read_ber(syntax)
  end

  ##
  # Destructively reads a BER object from the string.
  def read_ber!(syntax = nil)
    io = StringIO.new(self)

    result = io.read_ber(syntax)
    self.slice!(0...io.pos)

    return result
  end
end
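A brief sketch of the tag/length/content layout that String#to_ber emits, assuming the BER string extensions above are loaded; the sample string is arbitrary.

    "hello".to_ber.unpack('C*')
    # => [4, 5, 104, 101, 108, 108, 111]   (tag 0x04, length 5, then the bytes)

    "hello".to_ber_contextspecific(0).unpack('C*').first
    # => 128   (0x80: context-specific tag instead of the universal 0x04)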
@ -1,12 +0,0 @@
# -*- ruby encoding: utf-8 -*-
##
# BER extensions to +true+.
module Net::BER::Extensions::TrueClass
  ##
  # Converts +true+ to the BER wireline representation of +true+.
  def to_ber
    # 20100319 AZ: Note that this may not be the completely correct value,
    # per some test documentation. We need to determine the truth of this.
    "\001\001\001"
  end
end
@ -1,154 +0,0 @@
# -*- ruby encoding: utf-8 -*-
##
# An LDAP Dataset. Used primarily as an intermediate format for converting
# to and from LDIF strings and Net::LDAP::Entry objects.
class Net::LDAP::Dataset < Hash
  ##
  # Dataset object comments.
  attr_reader :comments

  def initialize(*args, &block) # :nodoc:
    super
    @comments = []
  end

  ##
  # Outputs an LDAP Dataset as an array of strings representing LDIF
  # entries.
  def to_ldif
    ary = []
    ary += @comments unless @comments.empty?
    keys.sort.each do |dn|
      ary << "dn: #{dn}"

      attributes = self[dn].keys.map { |attr| attr.to_s }.sort
      attributes.each do |attr|
        self[dn][attr.to_sym].each do |value|
          if attr == "userpassword" or value_is_binary?(value)
            value = [value].pack("m").chomp.gsub(/\n/m, "\n ")
            ary << "#{attr}:: #{value}"
          else
            ary << "#{attr}: #{value}"
          end
        end
      end

      ary << ""
    end
    block_given? and ary.each { |line| yield line }

    ary
  end

  ##
  # Outputs an LDAP Dataset as an LDIF string.
  def to_ldif_string
    to_ldif.join("\n")
  end

  ##
  # Convert the parsed LDIF objects to Net::LDAP::Entry objects.
  def to_entries
    ary = []
    keys.each do |dn|
      entry = Net::LDAP::Entry.new(dn)
      self[dn].each do |attr, value|
        entry[attr] = value
      end
      ary << entry
    end
    ary
  end

  ##
  # This is an internal convenience method to determine if a value requires
  # base64-encoding before conversion to LDIF output. The standard approach
  # in most LDAP tools is to check whether the value is a password, or if
  # the first or last bytes are non-printable. Microsoft Active Directory,
  # on the other hand, sometimes sends values that are binary in the middle.
  #
  # In the worst cases, this could be a nasty performance killer, which is
  # why we handle the simplest cases first. Ideally, we would also test the
  # first/last byte, but it's a bit harder to do this in a way that's
  # compatible with both 1.8.6 and 1.8.7.
  def value_is_binary?(value) # :nodoc:
    value = value.to_s
    return true if value[0] == ?: or value[0] == ?<
    value.each_byte { |byte| return true if (byte < 32) || (byte > 126) }
    false
  end
  private :value_is_binary?

  class << self
    class ChompedIO # :nodoc:
      def initialize(io)
        @io = io
      end
      def gets
        s = @io.gets
        s.chomp if s
      end
    end

    ##
    # Creates a Dataset object from an Entry object. Used mostly to assist
    # with the conversion of
    def from_entry(entry)
      dataset = Net::LDAP::Dataset.new
      hash = { }
      entry.each_attribute do |attribute, value|
        next if attribute == :dn
        hash[attribute] = value
      end
      dataset[entry.dn] = hash
      dataset
    end

    ##
    # Reads an object that returns data line-wise (using #gets) and parses
    # LDIF data into a Dataset object.
    def read_ldif(io)
      ds = Net::LDAP::Dataset.new
      io = ChompedIO.new(io)

      line = io.gets
      dn = nil

      while line
        new_line = io.gets

        if new_line =~ /^[\s]+/
          line << " " << $'
        else
          nextline = new_line

          if line =~ /^#/
            ds.comments << line
            yield :comment, line if block_given?
          elsif line =~ /^dn:[\s]*/i
            dn = $'
            ds[dn] = Hash.new { |k,v| k[v] = [] }
            yield :dn, dn if block_given?
          elsif line.empty?
            dn = nil
            yield :end, nil if block_given?
          elsif line =~ /^([^:]+):([\:]?)[\s]*/
            # $1 is the attribute name
            # $2 is a colon iff the attr-value is base-64 encoded
            # $' is the attr-value
            # Avoid the Base64 class because not all Ruby versions have it.
            attrvalue = ($2 == ":") ? $'.unpack('m').shift : $'
            ds[dn][$1.downcase.to_sym] << attrvalue
            yield :attr, [$1.downcase.to_sym, attrvalue] if block_given?
          end

          line = nextline
        end
      end

      ds
    end
  end
end

require 'net/ldap/entry' unless defined? Net::LDAP::Entry
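A rough sketch of the LDIF round trip the Dataset class above supports, assuming the vendored net-ldap code is loaded; the dn and attribute values are invented for illustration.

    require 'stringio'

    ldif = <<LDIF
    dn: uid=george,ou=People,dc=example,dc=com
    cn: George
    mail: george@example.com
    LDIF

    ds = Net::LDAP::Dataset.read_ldif(StringIO.new(ldif))
    ds.keys                 # => ["uid=george,ou=People,dc=example,dc=com"]
    ds.to_entries.first.cn  # => ["George"]
    puts ds.to_ldif_string  # prints the entry back out, attributes sorted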
@ -1,225 +0,0 @@
|
||||||
# -*- ruby encoding: utf-8 -*-
|
|
||||||
|
|
||||||
##
|
|
||||||
# Objects of this class represent an LDAP DN ("Distinguished Name"). A DN
|
|
||||||
# ("Distinguished Name") is a unique identifier for an entry within an LDAP
|
|
||||||
# directory. It is made up of a number of other attributes strung together,
|
|
||||||
# to identify the entry in the tree.
|
|
||||||
#
|
|
||||||
# Each attribute that makes up a DN needs to have its value escaped so that
|
|
||||||
# the DN is valid. This class helps take care of that.
|
|
||||||
#
|
|
||||||
# A fully escaped DN needs to be unescaped when analysing its contents. This
|
|
||||||
# class also helps take care of that.
|
|
||||||
class Net::LDAP::DN
|
|
||||||
##
|
|
||||||
# Initialize a DN, escaping as required. Pass in attributes in name/value
|
|
||||||
# pairs. If there is a left over argument, it will be appended to the dn
|
|
||||||
# without escaping (useful for a base string).
|
|
||||||
#
|
|
||||||
# Most uses of this class will be to escape a DN, rather than to parse it,
|
|
||||||
# so storing the dn as an escaped String and parsing parts as required
|
|
||||||
# with a state machine seems sensible.
|
|
||||||
def initialize(*args)
|
|
||||||
buffer = StringIO.new
|
|
||||||
|
|
||||||
args.each_index do |index|
|
|
||||||
buffer << "=" if index % 2 == 1
|
|
||||||
buffer << "," if index % 2 == 0 && index != 0
|
|
||||||
|
|
||||||
if index < args.length - 1 || index % 2 == 1
|
|
||||||
buffer << Net::LDAP::DN.escape(args[index])
|
|
||||||
else
|
|
||||||
buffer << args[index]
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
@dn = buffer.string
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Parse a DN into key value pairs using ASN from
|
|
||||||
# http://tools.ietf.org/html/rfc2253 section 3.
|
|
||||||
def each_pair
|
|
||||||
state = :key
|
|
||||||
key = StringIO.new
|
|
||||||
value = StringIO.new
|
|
||||||
hex_buffer = ""
|
|
||||||
|
|
||||||
@dn.each_char do |char|
|
|
||||||
case state
|
|
||||||
when :key then
|
|
||||||
case char
|
|
||||||
when 'a'..'z', 'A'..'Z' then
|
|
||||||
state = :key_normal
|
|
||||||
key << char
|
|
||||||
when '0'..'9' then
|
|
||||||
state = :key_oid
|
|
||||||
key << char
|
|
||||||
when ' ' then state = :key
|
|
||||||
else raise "DN badly formed"
|
|
||||||
end
|
|
||||||
when :key_normal then
|
|
||||||
case char
|
|
||||||
when '=' then state = :value
|
|
||||||
when 'a'..'z', 'A'..'Z', '0'..'9', '-', ' ' then key << char
|
|
||||||
else raise "DN badly formed"
|
|
||||||
end
|
|
||||||
when :key_oid then
|
|
||||||
case char
|
|
||||||
when '=' then state = :value
|
|
||||||
when '0'..'9', '.', ' ' then key << char
|
|
||||||
else raise "DN badly formed"
|
|
||||||
end
|
|
||||||
when :value then
|
|
||||||
case char
|
|
||||||
when '\\' then state = :value_normal_escape
|
|
||||||
when '"' then state = :value_quoted
|
|
||||||
when ' ' then state = :value
|
|
||||||
when '#' then
|
|
||||||
state = :value_hexstring
|
|
||||||
value << char
|
|
||||||
when ',' then
|
|
||||||
state = :key
|
|
||||||
yield key.string.strip, value.string.rstrip
|
|
||||||
key = StringIO.new
|
|
||||||
value = StringIO.new;
|
|
||||||
else
|
|
||||||
state = :value_normal
|
|
||||||
value << char
|
|
||||||
end
|
|
||||||
when :value_normal then
|
|
||||||
case char
|
|
||||||
when '\\' then state = :value_normal_escape
|
|
||||||
when ',' then
|
|
||||||
state = :key
|
|
||||||
yield key.string.strip, value.string.rstrip
|
|
||||||
key = StringIO.new
|
|
||||||
value = StringIO.new;
|
|
||||||
else value << char
|
|
||||||
end
|
|
||||||
when :value_normal_escape then
|
|
||||||
case char
|
|
||||||
when '0'..'9', 'a'..'f', 'A'..'F' then
|
|
||||||
state = :value_normal_escape_hex
|
|
||||||
hex_buffer = char
|
|
||||||
else state = :value_normal; value << char
|
|
||||||
end
|
|
||||||
when :value_normal_escape_hex then
|
|
||||||
case char
|
|
||||||
when '0'..'9', 'a'..'f', 'A'..'F' then
|
|
||||||
state = :value_normal
|
|
||||||
value << "#{hex_buffer}#{char}".to_i(16).chr
|
|
||||||
else raise "DN badly formed"
|
|
||||||
end
|
|
||||||
when :value_quoted then
|
|
||||||
case char
|
|
||||||
when '\\' then state = :value_quoted_escape
|
|
||||||
when '"' then state = :value_end
|
|
||||||
else value << char
|
|
||||||
end
|
|
||||||
when :value_quoted_escape then
|
|
||||||
case char
|
|
||||||
when '0'..'9', 'a'..'f', 'A'..'F' then
|
|
||||||
state = :value_quoted_escape_hex
|
|
||||||
hex_buffer = char
|
|
||||||
else
|
|
||||||
state = :value_quoted;
|
|
||||||
value << char
|
|
||||||
end
|
|
||||||
when :value_quoted_escape_hex then
|
|
||||||
case char
|
|
||||||
when '0'..'9', 'a'..'f', 'A'..'F' then
|
|
||||||
state = :value_quoted
|
|
||||||
value << "#{hex_buffer}#{char}".to_i(16).chr
|
|
||||||
else raise "DN badly formed"
|
|
||||||
end
|
|
||||||
when :value_hexstring then
|
|
||||||
case char
|
|
||||||
when '0'..'9', 'a'..'f', 'A'..'F' then
|
|
||||||
state = :value_hexstring_hex
|
|
||||||
value << char
|
|
||||||
when ' ' then state = :value_end
|
|
||||||
when ',' then
|
|
||||||
state = :key
|
|
||||||
yield key.string.strip, value.string.rstrip
|
|
||||||
key = StringIO.new
|
|
||||||
value = StringIO.new;
|
|
||||||
else raise "DN badly formed"
|
|
||||||
end
|
|
||||||
when :value_hexstring_hex then
|
|
||||||
case char
|
|
||||||
when '0'..'9', 'a'..'f', 'A'..'F' then
|
|
||||||
state = :value_hexstring
|
|
||||||
value << char
|
|
||||||
else raise "DN badly formed"
|
|
||||||
end
|
|
||||||
when :value_end then
|
|
||||||
case char
|
|
||||||
when ' ' then state = :value_end
|
|
||||||
when ',' then
|
|
||||||
state = :key
|
|
||||||
yield key.string.strip, value.string.rstrip
|
|
||||||
key = StringIO.new
|
|
||||||
value = StringIO.new;
|
|
||||||
else raise "DN badly formed"
|
|
||||||
end
|
|
||||||
else raise "Fell out of state machine"
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
# Last pair
|
|
||||||
if [:value, :value_normal, :value_hexstring, :value_end].include? state
|
|
||||||
yield key.string.strip, value.string.rstrip
|
|
||||||
else
|
|
||||||
raise "DN badly formed"
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Returns the DN as an array in the form expected by the constructor.
|
|
||||||
def to_a
|
|
||||||
a = []
|
|
||||||
self.each_pair { |key, value| a << key << value }
|
|
||||||
a
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Return the DN as an escaped string.
|
|
||||||
def to_s
|
|
||||||
@dn
|
|
||||||
end
|
|
||||||
|
|
||||||
# http://tools.ietf.org/html/rfc2253 section 2.4 lists these exceptions
|
|
||||||
# for dn values. All of the following must be escaped in any normal string
|
|
||||||
# using a single backslash ('\') as escape.
|
|
||||||
ESCAPES = {
|
|
||||||
',' => ',',
|
|
||||||
'+' => '+',
|
|
||||||
'"' => '"',
|
|
||||||
'\\' => '\\',
|
|
||||||
'<' => '<',
|
|
||||||
'>' => '>',
|
|
||||||
';' => ';',
|
|
||||||
}
|
|
||||||
|
|
||||||
# Compiled character class regexp using the keys from the above hash, and
|
|
||||||
# checking for a space or # at the start, or space at the end, of the
|
|
||||||
# string.
|
|
||||||
ESCAPE_RE = Regexp.new("(^ |^#| $|[" +
|
|
||||||
ESCAPES.keys.map { |e| Regexp.escape(e) }.join +
|
|
||||||
"])")
|
|
||||||
|
|
||||||
##
|
|
||||||
# Escape a string for use in a DN value
|
|
||||||
def self.escape(string)
|
|
||||||
string.gsub(ESCAPE_RE) { |char| "\\" + ESCAPES[char] }
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Proxy all other requests to the string object, because a DN is mainly
|
|
||||||
# used within the library as a string
|
|
||||||
def method_missing(method, *args, &block)
|
|
||||||
@dn.send(method, *args, &block)
|
|
||||||
end
|
|
||||||
end
|
|
|
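For reference, a small sketch of how the DN class above escapes on construction and unescapes again in #each_pair, assuming the vendored net-ldap is loaded; the names are made up.

    dn = Net::LDAP::DN.new('cn', 'Smith, Bob', 'ou=People,dc=example,dc=com')
    dn.to_s  # => "cn=Smith\\, Bob,ou=People,dc=example,dc=com"

    dn.each_pair { |key, value| puts "#{key} => #{value}" }
    # cn => Smith, Bob      (the escaped comma is undone during parsing)
    # ou => People
    # dc => example
    # dc => com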
@ -1,185 +0,0 @@
|
||||||
# -*- ruby encoding: utf-8 -*-
|
|
||||||
##
|
|
||||||
# Objects of this class represent individual entries in an LDAP directory.
|
|
||||||
# User code generally does not instantiate this class. Net::LDAP#search
|
|
||||||
# provides objects of this class to user code, either as block parameters or
|
|
||||||
# as return values.
|
|
||||||
#
|
|
||||||
# In LDAP-land, an "entry" is a collection of attributes that are uniquely
|
|
||||||
# and globally identified by a DN ("Distinguished Name"). Attributes are
|
|
||||||
# identified by short, descriptive words or phrases. Although a directory is
|
|
||||||
# free to implement any attribute name, most of them follow rigorous
|
|
||||||
# standards so that the range of commonly-encountered attribute names is not
|
|
||||||
# large.
|
|
||||||
#
|
|
||||||
# An attribute name is case-insensitive. Most directories also restrict the
|
|
||||||
# range of characters allowed in attribute names. To simplify handling
|
|
||||||
# attribute names, Net::LDAP::Entry internally converts them to a standard
|
|
||||||
# format. Therefore, the methods which take attribute names can take Strings
|
|
||||||
# or Symbols, and work correctly regardless of case or capitalization.
|
|
||||||
#
|
|
||||||
# An attribute consists of zero or more data items called <i>values.</i> An
|
|
||||||
# entry is the combination of a unique DN, a set of attribute names, and a
|
|
||||||
# (possibly-empty) array of values for each attribute.
|
|
||||||
#
|
|
||||||
# Class Net::LDAP::Entry provides convenience methods for dealing with LDAP
|
|
||||||
# entries. In addition to the methods documented below, you may access
|
|
||||||
# individual attributes of an entry simply by giving the attribute name as
|
|
||||||
# the name of a method call. For example:
|
|
||||||
#
|
|
||||||
# ldap.search( ... ) do |entry|
|
|
||||||
# puts "Common name: #{entry.cn}"
|
|
||||||
# puts "Email addresses:"
|
|
||||||
# entry.mail.each {|ma| puts ma}
|
|
||||||
# end
|
|
||||||
#
|
|
||||||
# If you use this technique to access an attribute that is not present in a
|
|
||||||
# particular Entry object, a NoMethodError exception will be raised.
|
|
||||||
#
|
|
||||||
#--
|
|
||||||
# Ugly problem to fix someday: We key off the internal hash with a canonical
|
|
||||||
# form of the attribute name: convert to a string, downcase, then take the
|
|
||||||
# symbol. Unfortunately we do this in at least three places. Should do it in
|
|
||||||
# ONE place.
|
|
||||||
class Net::LDAP::Entry
|
|
||||||
##
|
|
||||||
# This constructor is not generally called by user code.
|
|
||||||
def initialize(dn = nil) #:nodoc:
|
|
||||||
@myhash = {}
|
|
||||||
@myhash[:dn] = [dn]
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Use the LDIF format for Marshal serialization.
|
|
||||||
def _dump(depth) #:nodoc:
|
|
||||||
to_ldif
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Use the LDIF format for Marshal serialization.
|
|
||||||
def self._load(entry) #:nodoc:
|
|
||||||
from_single_ldif_string(entry)
|
|
||||||
end
|
|
||||||
|
|
||||||
class << self
|
|
||||||
##
|
|
||||||
# Converts a single LDIF entry string into an Entry object. Useful for
|
|
||||||
# Marshal serialization. If a string with multiple LDIF entries is
|
|
||||||
# provided, an exception will be raised.
|
|
||||||
def from_single_ldif_string(ldif)
|
|
||||||
ds = Net::LDAP::Dataset.read_ldif(::StringIO.new(ldif))
|
|
||||||
|
|
||||||
return nil if ds.empty?
|
|
||||||
|
|
||||||
raise Net::LDAP::LdapError, "Too many LDIF entries" unless ds.size == 1
|
|
||||||
|
|
||||||
entry = ds.to_entries.first
|
|
||||||
|
|
||||||
return nil if entry.dn.nil?
|
|
||||||
entry
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Canonicalizes an LDAP attribute name as a \Symbol. The name is
|
|
||||||
# lowercased and, if present, a trailing equals sign is removed.
|
|
||||||
def attribute_name(name)
|
|
||||||
name = name.to_s.downcase
|
|
||||||
name = name[0..-2] if name[-1] == ?=
|
|
||||||
name.to_sym
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Sets or replaces the array of values for the provided attribute. The
|
|
||||||
# attribute name is canonicalized prior to assignment.
|
|
||||||
#
|
|
||||||
# When an attribute is set using this, that attribute is now made
|
|
||||||
# accessible through methods as well.
|
|
||||||
#
|
|
||||||
# entry = Net::LDAP::Entry.new("dc=com")
|
|
||||||
# entry.foo # => NoMethodError
|
|
||||||
# entry["foo"] = 12345 # => [12345]
|
|
||||||
# entry.foo # => [12345]
|
|
||||||
def []=(name, value)
|
|
||||||
@myhash[self.class.attribute_name(name)] = Kernel::Array(value)
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Reads the array of values for the provided attribute. The attribute name
|
|
||||||
# is canonicalized prior to reading. Returns an empty array if the
|
|
||||||
# attribute does not exist.
|
|
||||||
def [](name)
|
|
||||||
name = self.class.attribute_name(name)
|
|
||||||
@myhash[name] || []
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Returns the first distinguished name (dn) of the Entry as a \String.
|
|
||||||
def dn
|
|
||||||
self[:dn].first.to_s
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Returns an array of the attribute names present in the Entry.
|
|
||||||
def attribute_names
|
|
||||||
@myhash.keys
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Accesses each of the attributes present in the Entry.
|
|
||||||
#
|
|
||||||
# Calls a user-supplied block with each attribute in turn, passing two
|
|
||||||
# arguments to the block: a Symbol giving the name of the attribute, and a
|
|
||||||
# (possibly empty) \Array of data values.
|
|
||||||
def each # :yields: attribute-name, data-values-array
|
|
||||||
if block_given?
|
|
||||||
attribute_names.each {|a|
|
|
||||||
attr_name,values = a,self[a]
|
|
||||||
yield attr_name, values
|
|
||||||
}
|
|
||||||
end
|
|
||||||
end
|
|
||||||
alias_method :each_attribute, :each
|
|
||||||
|
|
||||||
##
|
|
||||||
# Converts the Entry to an LDIF-formatted String
|
|
||||||
def to_ldif
|
|
||||||
Net::LDAP::Dataset.from_entry(self).to_ldif_string
|
|
||||||
end
|
|
||||||
|
|
||||||
def respond_to?(sym) #:nodoc:
|
|
||||||
return true if valid_attribute?(self.class.attribute_name(sym))
|
|
||||||
return super
|
|
||||||
end
|
|
||||||
|
|
||||||
def method_missing(sym, *args, &block) #:nodoc:
|
|
||||||
name = self.class.attribute_name(sym)
|
|
||||||
|
|
||||||
if valid_attribute?(name)
|
|
||||||
if setter?(sym) && args.size == 1
|
|
||||||
value = args.first
|
|
||||||
value = Array(value)
|
|
||||||
self[name]= value
|
|
||||||
return value
|
|
||||||
elsif args.empty?
|
|
||||||
return self[name]
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
super
|
|
||||||
end
|
|
||||||
|
|
||||||
# Given a valid attribute symbol, returns true.
|
|
||||||
def valid_attribute?(attr_name)
|
|
||||||
attribute_names.include?(attr_name)
|
|
||||||
end
|
|
||||||
private :valid_attribute?
|
|
||||||
|
|
||||||
# Returns true if the symbol ends with an equal sign.
|
|
||||||
def setter?(sym)
|
|
||||||
sym.to_s[-1] == ?=
|
|
||||||
end
|
|
||||||
private :setter?
|
|
||||||
end # class Entry
|
|
||||||
|
|
||||||
require 'net/ldap/dataset' unless defined? Net::LDAP::Dataset
|
|
|
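A minimal sketch of the attribute handling described above, assuming the vendored net-ldap is loaded; the dn and values are hypothetical.

    entry = Net::LDAP::Entry.new('uid=george,dc=example,dc=com')
    entry[:cn] = 'George'                   # canonicalized to :cn, wrapped in an Array
    entry['MAIL'] = ['george@example.com']  # name is case-insensitive

    entry.cn               # => ["George"]   (method_missing reader)
    entry.mail             # => ["george@example.com"]
    entry.dn               # => "uid=george,dc=example,dc=com"
    entry.attribute_names  # => [:dn, :cn, :mail]
    entry.sn               # raises NoMethodError: attribute not present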
@ -1,759 +0,0 @@
|
||||||
# -*- ruby encoding: utf-8 -*-
|
|
||||||
|
|
||||||
##
|
|
||||||
# Class Net::LDAP::Filter is used to constrain LDAP searches. An object of
|
|
||||||
# this class is passed to Net::LDAP#search in the parameter :filter.
|
|
||||||
#
|
|
||||||
# Net::LDAP::Filter supports the complete set of search filters available in
|
|
||||||
# LDAP, including conjunction, disjunction and negation (AND, OR, and NOT).
|
|
||||||
# This class supplants the (infamous) RFC 2254 standard notation for
|
|
||||||
# specifying LDAP search filters.
|
|
||||||
#--
|
|
||||||
# NOTE: This wording needs to change as we will be supporting LDAPv3 search
|
|
||||||
# filter strings (RFC 4515).
|
|
||||||
#++
|
|
||||||
#
|
|
||||||
# Here's how to code the familiar "objectclass is present" filter:
|
|
||||||
# f = Net::LDAP::Filter.present("objectclass")
|
|
||||||
#
|
|
||||||
# The object returned by this code can be passed directly to the
|
|
||||||
# <tt>:filter</tt> parameter of Net::LDAP#search.
|
|
||||||
#
|
|
||||||
# See the individual class and instance methods below for more examples.
|
|
||||||
class Net::LDAP::Filter
|
|
||||||
##
|
|
||||||
# Known filter types.
|
|
||||||
FilterTypes = [ :ne, :eq, :ge, :le, :and, :or, :not, :ex ]
|
|
||||||
|
|
||||||
def initialize(op, left, right) #:nodoc:
|
|
||||||
unless FilterTypes.include?(op)
|
|
||||||
raise Net::LDAP::LdapError, "Invalid or unsupported operator #{op.inspect} in LDAP Filter."
|
|
||||||
end
|
|
||||||
@op = op
|
|
||||||
@left = left
|
|
||||||
@right = right
|
|
||||||
end
|
|
||||||
|
|
||||||
class << self
|
|
||||||
# We don't want filters created except using our custom constructors.
|
|
||||||
private :new
|
|
||||||
|
|
||||||
##
|
|
||||||
# Creates a Filter object indicating that the value of a particular
|
|
||||||
# attribute must either be present or match a particular string.
|
|
||||||
#
|
|
||||||
# Specifying that an attribute is 'present' means only directory entries
|
|
||||||
# which contain a value for the particular attribute will be selected by
|
|
||||||
# the filter. This is useful in case of optional attributes such as
|
|
||||||
# <tt>mail.</tt> Presence is indicated by giving the value "*" in the
|
|
||||||
# second parameter to #eq. This example selects only entries that have
|
|
||||||
# one or more values for <tt>sAMAccountName:</tt>
|
|
||||||
#
|
|
||||||
# f = Net::LDAP::Filter.eq("sAMAccountName", "*")
|
|
||||||
#
|
|
||||||
# To match a particular range of values, pass a string as the second
|
|
||||||
# parameter to #eq. The string may contain one or more "*" characters as
|
|
||||||
# wildcards: these match zero or more occurrences of any character. Full
|
|
||||||
# regular-expressions are <i>not</i> supported due to limitations in the
|
|
||||||
# underlying LDAP protocol. This example selects any entry with a
|
|
||||||
# <tt>mail</tt> value containing the substring "anderson":
|
|
||||||
#
|
|
||||||
# f = Net::LDAP::Filter.eq("mail", "*anderson*")
|
|
||||||
#
|
|
||||||
# This filter does not perform any escaping
|
|
||||||
def eq(attribute, value)
|
|
||||||
new(:eq, attribute, value)
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Creates a Filter object indicating extensible comparison. This Filter
|
|
||||||
# object is currently considered EXPERIMENTAL.
|
|
||||||
#
|
|
||||||
# sample_attributes = ['cn:fr', 'cn:fr.eq',
|
|
||||||
# 'cn:1.3.6.1.4.1.42.2.27.9.4.49.1.3', 'cn:dn:fr', 'cn:dn:fr.eq']
|
|
||||||
# attr = sample_attributes.first # Pick an extensible attribute
|
|
||||||
# value = 'roberts'
|
|
||||||
#
|
|
||||||
# filter = "#{attr}:=#{value}" # Basic String Filter
|
|
||||||
# filter = Net::LDAP::Filter.ex(attr, value) # Net::LDAP::Filter
|
|
||||||
#
|
|
||||||
# # Perform a search with the Extensible Match Filter
|
|
||||||
# Net::LDAP.search(:filter => filter)
|
|
||||||
#--
|
|
||||||
# The LDIF required to support the above examples on the OpenDS LDAP
|
|
||||||
# server:
|
|
||||||
#
|
|
||||||
# version: 1
|
|
||||||
#
|
|
||||||
# dn: dc=example,dc=com
|
|
||||||
# objectClass: domain
|
|
||||||
# objectClass: top
|
|
||||||
# dc: example
|
|
||||||
#
|
|
||||||
# dn: ou=People,dc=example,dc=com
|
|
||||||
# objectClass: organizationalUnit
|
|
||||||
# objectClass: top
|
|
||||||
# ou: People
|
|
||||||
#
|
|
||||||
# dn: uid=1,ou=People,dc=example,dc=com
|
|
||||||
# objectClass: person
|
|
||||||
# objectClass: organizationalPerson
|
|
||||||
# objectClass: inetOrgPerson
|
|
||||||
# objectClass: top
|
|
||||||
# cn:: csO0YsOpcnRz
|
|
||||||
# sn:: YsO0YiByw7Riw6lydHM=
|
|
||||||
# givenName:: YsO0Yg==
|
|
||||||
# uid: 1
|
|
||||||
#
|
|
||||||
# =Refs:
|
|
||||||
# * http://www.ietf.org/rfc/rfc2251.txt
|
|
||||||
# * http://www.novell.com/documentation/edir88/edir88/?page=/documentation/edir88/edir88/data/agazepd.html
|
|
||||||
# * https://docs.opends.org/2.0/page/SearchingUsingInternationalCollationRules
|
|
||||||
#++
|
|
||||||
def ex(attribute, value)
|
|
||||||
new(:ex, attribute, value)
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Creates a Filter object indicating that a particular attribute value
|
|
||||||
# is either not present or does not match a particular string; see
|
|
||||||
# Filter::eq for more information.
|
|
||||||
#
|
|
||||||
# This filter does not perform any escaping
|
|
||||||
def ne(attribute, value)
|
|
||||||
new(:ne, attribute, value)
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Creates a Filter object indicating that the value of a particular
|
|
||||||
# attribute must match a particular string. The attribute value is
|
|
||||||
# escaped, so the "*" character is interpreted literally.
|
|
||||||
def equals(attribute, value)
|
|
||||||
new(:eq, attribute, escape(value))
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Creates a Filter object indicating that the value of a particular
|
|
||||||
# attribute must begin with a particular string. The attribute value is
|
|
||||||
# escaped, so the "*" character is interpreted literally.
|
|
||||||
def begins(attribute, value)
|
|
||||||
new(:eq, attribute, escape(value) + "*")
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Creates a Filter object indicating that the value of a particular
|
|
||||||
# attribute must end with a particular string. The attribute value is
|
|
||||||
# escaped, so the "*" character is interpreted literally.
|
|
||||||
def ends(attribute, value)
|
|
||||||
new(:eq, attribute, "*" + escape(value))
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Creates a Filter object indicating that the value of a particular
|
|
||||||
# attribute must contain a particular string. The attribute value is
|
|
||||||
# escaped, so the "*" character is interpreted literally.
|
|
||||||
def contains(attribute, value)
|
|
||||||
new(:eq, attribute, "*" + escape(value) + "*")
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Creates a Filter object indicating that a particular attribute value
|
|
||||||
# is greater than or equal to the specified value.
|
|
||||||
def ge(attribute, value)
|
|
||||||
new(:ge, attribute, value)
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Creates a Filter object indicating that a particular attribute value
|
|
||||||
# is less than or equal to the specified value.
|
|
||||||
def le(attribute, value)
|
|
||||||
new(:le, attribute, value)
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Joins two or more filters so that all conditions must be true. Calling
|
|
||||||
# <tt>Filter.join(left, right)</tt> is the same as <tt>left &
|
|
||||||
# right</tt>.
|
|
||||||
#
|
|
||||||
# # Selects only entries that have an <tt>objectclass</tt> attribute.
|
|
||||||
# x = Net::LDAP::Filter.present("objectclass")
|
|
||||||
# # Selects only entries that have a <tt>mail</tt> attribute that begins
|
|
||||||
# # with "George".
|
|
||||||
# y = Net::LDAP::Filter.eq("mail", "George*")
|
|
||||||
# # Selects only entries that meet both conditions above.
|
|
||||||
# z = Net::LDAP::Filter.join(x, y)
|
|
||||||
def join(left, right)
|
|
||||||
new(:and, left, right)
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Creates a disjoint comparison between two or more filters. Selects
|
|
||||||
# entries where either the left or right side are true. Calling
|
|
||||||
# <tt>Filter.intersect(left, right)</tt> is the same as <tt>left |
|
|
||||||
# right</tt>.
|
|
||||||
#
|
|
||||||
# # Selects only entries that have an <tt>objectclass</tt> attribute.
|
|
||||||
# x = Net::LDAP::Filter.present("objectclass")
|
|
||||||
# # Selects only entries that have a <tt>mail</tt> attribute that begins
|
|
||||||
# # with "George".
|
|
||||||
# y = Net::LDAP::Filter.eq("mail", "George*")
|
|
||||||
# # Selects only entries that meet either condition above.
|
|
||||||
# z = x | y
|
|
||||||
def intersect(left, right)
|
|
||||||
new(:or, left, right)
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Negates a filter. Calling <tt>Filter.negate(filter)</tt> is the same
|
|
||||||
# as <tt>~filter</tt>.
|
|
||||||
#
|
|
||||||
# # Selects only entries that do not have an <tt>objectclass</tt>
|
|
||||||
# # attribute.
|
|
||||||
# x = ~Net::LDAP::Filter.present("objectclass")
|
|
||||||
def negate(filter)
|
|
||||||
new(:not, filter, nil)
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# This is a synonym for #eq(attribute, "*"). Also known as #present and
|
|
||||||
# #pres.
|
|
||||||
def present?(attribute)
|
|
||||||
eq(attribute, "*")
|
|
||||||
end
|
|
||||||
alias_method :present, :present?
|
|
||||||
alias_method :pres, :present?
|
|
||||||
|
|
||||||
# http://tools.ietf.org/html/rfc4515 lists these exceptions from UTF1
|
|
||||||
# charset for filters. All of the following must be escaped in any normal
|
|
||||||
# string using a single backslash ('\') as escape.
|
|
||||||
#
|
|
||||||
ESCAPES = {
|
|
||||||
"\0" => '00', # NUL = %x00 ; null character
|
|
||||||
'*' => '2A', # ASTERISK = %x2A ; asterisk ("*")
|
|
||||||
'(' => '28', # LPARENS = %x28 ; left parenthesis ("(")
|
|
||||||
')' => '29', # RPARENS = %x29 ; right parenthesis (")")
|
|
||||||
'\\' => '5C', # ESC = %x5C ; esc (or backslash) ("\")
|
|
||||||
}
|
|
||||||
# Compiled character class regexp using the keys from the above hash.
|
|
||||||
ESCAPE_RE = Regexp.new(
|
|
||||||
"[" +
|
|
||||||
ESCAPES.keys.map { |e| Regexp.escape(e) }.join +
|
|
||||||
"]")
|
|
||||||
|
|
||||||
##
|
|
||||||
# Escape a string for use in an LDAP filter
|
|
||||||
def escape(string)
|
|
||||||
string.gsub(ESCAPE_RE) { |char| "\\" + ESCAPES[char] }
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Converts an LDAP search filter in BER format to an Net::LDAP::Filter
|
|
||||||
# object. The incoming BER object most likely came to us by parsing an
|
|
||||||
# LDAP searchRequest PDU. See also the comments under #to_ber, including
|
|
||||||
# the grammar snippet from the RFC.
|
|
||||||
#--
|
|
||||||
# We're hardcoding the BER constants from the RFC. These should be
|
|
||||||
# broken out into constants.
|
|
||||||
def parse_ber(ber)
|
|
||||||
case ber.ber_identifier
|
|
||||||
when 0xa0 # context-specific constructed 0, "and"
|
|
||||||
ber.map { |b| parse_ber(b) }.inject { |memo, obj| memo & obj }
|
|
||||||
when 0xa1 # context-specific constructed 1, "or"
|
|
||||||
ber.map { |b| parse_ber(b) }.inject { |memo, obj| memo | obj }
|
|
||||||
when 0xa2 # context-specific constructed 2, "not"
|
|
||||||
~parse_ber(ber.first)
|
|
||||||
when 0xa3 # context-specific constructed 3, "equalityMatch"
|
|
||||||
if ber.last == "*"
|
|
||||||
else
|
|
||||||
eq(ber.first, ber.last)
|
|
||||||
end
|
|
||||||
when 0xa4 # context-specific constructed 4, "substring"
|
|
||||||
str = ""
|
|
||||||
final = false
|
|
||||||
ber.last.each { |b|
|
|
||||||
case b.ber_identifier
|
|
||||||
when 0x80 # context-specific primitive 0, SubstringFilter "initial"
|
|
||||||
raise Net::LDAP::LdapError, "Unrecognized substring filter; bad initial value." if str.length > 0
|
|
||||||
str += b
|
|
||||||
when 0x81 # context-specific primitive 0, SubstringFilter "any"
|
|
||||||
str += "*#{b}"
|
|
||||||
when 0x82 # context-specific primitive 0, SubstringFilter "final"
|
|
||||||
str += "*#{b}"
|
|
||||||
final = true
|
|
||||||
end
|
|
||||||
}
|
|
||||||
str += "*" unless final
|
|
||||||
eq(ber.first.to_s, str)
|
|
||||||
when 0xa5 # context-specific constructed 5, "greaterOrEqual"
|
|
||||||
ge(ber.first.to_s, ber.last.to_s)
|
|
||||||
when 0xa6 # context-specific constructed 6, "lessOrEqual"
|
|
||||||
le(ber.first.to_s, ber.last.to_s)
|
|
||||||
when 0x87 # context-specific primitive 7, "present"
|
|
||||||
# call to_s to get rid of the BER-identifiedness of the incoming string.
|
|
||||||
present?(ber.to_s)
|
|
||||||
when 0xa9 # context-specific constructed 9, "extensible comparison"
|
|
||||||
raise Net::LDAP::LdapError, "Invalid extensible search filter, should be at least two elements" if ber.size<2
|
|
||||||
|
|
||||||
# Reassembles the extensible filter parts
|
|
||||||
# (["sn", "2.4.6.8.10", "Barbara Jones", '1'])
|
|
||||||
type = value = dn = rule = nil
|
|
||||||
ber.each do |element|
|
|
||||||
case element.ber_identifier
|
|
||||||
when 0x81 then rule=element
|
|
||||||
when 0x82 then type=element
|
|
||||||
when 0x83 then value=element
|
|
||||||
when 0x84 then dn='dn'
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
attribute = ''
|
|
||||||
attribute << type if type
|
|
||||||
attribute << ":#{dn}" if dn
|
|
||||||
attribute << ":#{rule}" if rule
|
|
||||||
|
|
||||||
ex(attribute, value)
|
|
||||||
else
|
|
||||||
raise Net::LDAP::LdapError, "Invalid BER tag-value (#{ber.ber_identifier}) in search filter."
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Converts an LDAP filter-string (in the prefix syntax specified in RFC-2254)
|
|
||||||
# to a Net::LDAP::Filter.
|
|
||||||
def construct(ldap_filter_string)
|
|
||||||
FilterParser.parse(ldap_filter_string)
|
|
||||||
end
|
|
||||||
alias_method :from_rfc2254, :construct
|
|
||||||
alias_method :from_rfc4515, :construct
|
|
||||||
|
|
||||||
##
|
|
||||||
# Convert an RFC-1777 LDAP/BER "Filter" object to a Net::LDAP::Filter
|
|
||||||
# object.
|
|
||||||
#--
|
|
||||||
# TODO, we're hardcoding the RFC-1777 BER-encodings of the various
|
|
||||||
# filter types. Could pull them out into a constant.
|
|
||||||
#++
|
|
||||||
def parse_ldap_filter(obj)
|
|
||||||
case obj.ber_identifier
|
|
||||||
when 0x87 # present. context-specific primitive 7.
|
|
||||||
eq(obj.to_s, "*")
|
|
||||||
when 0xa3 # equalityMatch. context-specific constructed 3.
|
|
||||||
eq(obj[0], obj[1])
|
|
||||||
else
|
|
||||||
raise Net::LDAP::LdapError, "Unknown LDAP search-filter type: #{obj.ber_identifier}"
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Joins two or more filters so that all conditions must be true.
|
|
||||||
#
|
|
||||||
# # Selects only entries that have an <tt>objectclass</tt> attribute.
|
|
||||||
# x = Net::LDAP::Filter.present("objectclass")
|
|
||||||
# # Selects only entries that have a <tt>mail</tt> attribute that begins
|
|
||||||
# # with "George".
|
|
||||||
# y = Net::LDAP::Filter.eq("mail", "George*")
|
|
||||||
# # Selects only entries that meet both conditions above.
|
|
||||||
# z = x & y
|
|
||||||
def &(filter)
|
|
||||||
self.class.join(self, filter)
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Creates a disjoint comparison between two or more filters. Selects
|
|
||||||
# entries where either the left or right side are true.
|
|
||||||
#
|
|
||||||
# # Selects only entries that have an <tt>objectclass</tt> attribute.
|
|
||||||
# x = Net::LDAP::Filter.present("objectclass")
|
|
||||||
# # Selects only entries that have a <tt>mail</tt> attribute that begins
|
|
||||||
# # with "George".
|
|
||||||
# y = Net::LDAP::Filter.eq("mail", "George*")
|
|
||||||
# # Selects only entries that meet either condition above.
|
|
||||||
# z = x | y
|
|
||||||
def |(filter)
|
|
||||||
self.class.intersect(self, filter)
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Negates a filter.
|
|
||||||
#
|
|
||||||
# # Selects only entries that do not have an <tt>objectclass</tt>
|
|
||||||
# # attribute.
|
|
||||||
# x = ~Net::LDAP::Filter.present("objectclass")
|
|
||||||
def ~@
|
|
||||||
self.class.negate(self)
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Equality operator for filters, useful primarily for constructing unit tests.
|
|
||||||
def ==(filter)
|
|
||||||
# 20100320 AZ: We need to come up with a better way of doing this. This
|
|
||||||
# is just nasty.
|
|
||||||
str = "[@op,@left,@right]"
|
|
||||||
self.instance_eval(str) == filter.instance_eval(str)
|
|
||||||
end
|
|
||||||
|
|
||||||
def to_raw_rfc2254
|
|
||||||
case @op
|
|
||||||
when :ne
|
|
||||||
"!(#{@left}=#{@right})"
|
|
||||||
when :eq
|
|
||||||
"#{@left}=#{@right}"
|
|
||||||
when :ex
|
|
||||||
"#{@left}:=#{@right}"
|
|
||||||
when :ge
|
|
||||||
"#{@left}>=#{@right}"
|
|
||||||
when :le
|
|
||||||
"#{@left}<=#{@right}"
|
|
||||||
when :and
|
|
||||||
"&(#{@left.to_raw_rfc2254})(#{@right.to_raw_rfc2254})"
|
|
||||||
when :or
|
|
||||||
"|(#{@left.to_raw_rfc2254})(#{@right.to_raw_rfc2254})"
|
|
||||||
when :not
|
|
||||||
"!(#{@left.to_raw_rfc2254})"
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Converts the Filter object to an RFC 2254-compatible text format.
|
|
||||||
def to_rfc2254
|
|
||||||
"(#{to_raw_rfc2254})"
|
|
||||||
end
|
|
||||||
|
|
||||||
def to_s
|
|
||||||
to_rfc2254
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Converts the filter to BER format.
|
|
||||||
#--
|
|
||||||
# Filter ::=
|
|
||||||
# CHOICE {
|
|
||||||
# and [0] SET OF Filter,
|
|
||||||
# or [1] SET OF Filter,
|
|
||||||
# not [2] Filter,
|
|
||||||
# equalityMatch [3] AttributeValueAssertion,
|
|
||||||
# substrings [4] SubstringFilter,
|
|
||||||
# greaterOrEqual [5] AttributeValueAssertion,
|
|
||||||
# lessOrEqual [6] AttributeValueAssertion,
|
|
||||||
# present [7] AttributeType,
|
|
||||||
# approxMatch [8] AttributeValueAssertion,
|
|
||||||
# extensibleMatch [9] MatchingRuleAssertion
|
|
||||||
# }
|
|
||||||
#
|
|
||||||
# SubstringFilter ::=
|
|
||||||
# SEQUENCE {
|
|
||||||
# type AttributeType,
|
|
||||||
# SEQUENCE OF CHOICE {
|
|
||||||
# initial [0] LDAPString,
|
|
||||||
# any [1] LDAPString,
|
|
||||||
# final [2] LDAPString
|
|
||||||
# }
|
|
||||||
# }
|
|
||||||
#
|
|
||||||
# MatchingRuleAssertion ::=
|
|
||||||
# SEQUENCE {
|
|
||||||
# matchingRule [1] MatchingRuleId OPTIONAL,
|
|
||||||
# type [2] AttributeDescription OPTIONAL,
|
|
||||||
# matchValue [3] AssertionValue,
|
|
||||||
# dnAttributes [4] BOOLEAN DEFAULT FALSE
|
|
||||||
# }
|
|
||||||
#
|
|
||||||
# Matching Rule Suffixes
|
|
||||||
# Less than [.1] or .[lt]
|
|
||||||
# Less than or equal to [.2] or [.lte]
|
|
||||||
# Equality [.3] or [.eq] (default)
|
|
||||||
# Greater than or equal to [.4] or [.gte]
|
|
||||||
# Greater than [.5] or [.gt]
|
|
||||||
# Substring [.6] or [.sub]
|
|
||||||
#
|
|
||||||
#++
|
|
||||||
def to_ber
|
|
||||||
case @op
|
|
||||||
when :eq
|
|
||||||
if @right == "*" # presence test
|
|
||||||
@left.to_s.to_ber_contextspecific(7)
|
|
||||||
elsif @right =~ /[*]/ # substring
|
|
||||||
# Parsing substrings is a little tricky. We use String#split to
|
|
||||||
# break a string into substrings delimited by the * (star)
|
|
||||||
# character. But we also need to know whether there is a star at the
|
|
||||||
# head and tail of the string, so we use a limit parameter value of
|
|
||||||
# -1: "If negative, there is no limit to the number of fields
|
|
||||||
# returned, and trailing null fields are not suppressed."
|
|
||||||
#
|
|
||||||
# 20100320 AZ: This is much simpler than the previous version. Also,
|
|
||||||
# unnecessary regex escaping has been removed.
|
|
||||||
|
|
||||||
ary = @right.split(/[*]+/, -1)
|
|
||||||
|
|
||||||
if ary.first.empty?
|
|
||||||
first = nil
|
|
||||||
ary.shift
|
|
||||||
else
|
|
||||||
first = ary.shift.to_ber_contextspecific(0)
|
|
||||||
end
|
|
||||||
|
|
||||||
if ary.last.empty?
|
|
||||||
last = nil
|
|
||||||
ary.pop
|
|
||||||
else
|
|
||||||
last = ary.pop.to_ber_contextspecific(2)
|
|
||||||
end
|
|
||||||
|
|
||||||
seq = ary.map { |e| e.to_ber_contextspecific(1) }
|
|
||||||
seq.unshift first if first
|
|
||||||
seq.push last if last
|
|
||||||
|
|
||||||
[@left.to_s.to_ber, seq.to_ber].to_ber_contextspecific(4)
|
|
||||||
else # equality
|
|
||||||
[@left.to_s.to_ber, unescape(@right).to_ber].to_ber_contextspecific(3)
|
|
||||||
end
|
|
||||||
when :ex
|
|
||||||
seq = []
|
|
||||||
|
|
||||||
unless @left =~ /^([-;\w]*)(:dn)?(:(\w+|[.\w]+))?$/
|
|
||||||
raise Net::LDAP::LdapError, "Bad attribute #{@left}"
|
|
||||||
end
|
|
||||||
type, dn, rule = $1, $2, $4
|
|
||||||
|
|
||||||
seq << rule.to_ber_contextspecific(1) unless rule.to_s.empty? # matchingRule
|
|
||||||
seq << type.to_ber_contextspecific(2) unless type.to_s.empty? # type
|
|
||||||
seq << unescape(@right).to_ber_contextspecific(3) # matchingValue
|
|
||||||
seq << "1".to_ber_contextspecific(4) unless dn.to_s.empty? # dnAttributes
|
|
||||||
|
|
||||||
seq.to_ber_contextspecific(9)
|
|
||||||
when :ge
|
|
||||||
[@left.to_s.to_ber, unescape(@right).to_ber].to_ber_contextspecific(5)
|
|
||||||
when :le
|
|
||||||
[@left.to_s.to_ber, unescape(@right).to_ber].to_ber_contextspecific(6)
|
|
||||||
when :ne
|
|
||||||
[self.class.eq(@left, @right).to_ber].to_ber_contextspecific(2)
|
|
||||||
when :and
|
|
||||||
ary = [@left.coalesce(:and), @right.coalesce(:and)].flatten
|
|
||||||
ary.map {|a| a.to_ber}.to_ber_contextspecific(0)
|
|
||||||
when :or
|
|
||||||
ary = [@left.coalesce(:or), @right.coalesce(:or)].flatten
|
|
||||||
ary.map {|a| a.to_ber}.to_ber_contextspecific(1)
|
|
||||||
when :not
|
|
||||||
[@left.to_ber].to_ber_contextspecific(2)
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Perform filter operations against a user-supplied block. This is useful
|
|
||||||
# when implementing an LDAP directory server. The caller's block will be
|
|
||||||
# called with two arguments: first, a symbol denoting the "operation" of
|
|
||||||
# the filter; and second, an array consisting of arguments to the
|
|
||||||
# operation. The user-supplied block (which is MANDATORY) should perform
|
|
||||||
# some desired application-defined processing, and may return a
|
|
||||||
# locally-meaningful object that will appear as a parameter in the :and,
|
|
||||||
# :or and :not operations detailed below.
|
|
||||||
#
|
|
||||||
# A typical object to return from the user-supplied block is an array of
|
|
||||||
# Net::LDAP::Filter objects.
|
|
||||||
#
|
|
||||||
# These are the possible values that may be passed to the user-supplied
|
|
||||||
# block:
|
|
||||||
# * :equalityMatch (the arguments will be an attribute name and a value
|
|
||||||
# to be matched);
|
|
||||||
# * :substrings (two arguments: an attribute name and a value containing
|
|
||||||
# one or more "*" characters);
|
|
||||||
# * :present (one argument: an attribute name);
|
|
||||||
# * :greaterOrEqual (two arguments: an attribute name and a value to be
|
|
||||||
# compared against);
|
|
||||||
# * :lessOrEqual (two arguments: an attribute name and a value to be
|
|
||||||
# compared against);
|
|
||||||
# * :and (two or more arguments, each of which is an object returned
|
|
||||||
# from a recursive call to #execute, with the same block;
|
|
||||||
# * :or (two or more arguments, each of which is an object returned from
|
|
||||||
# a recursive call to #execute, with the same block; and
|
|
||||||
# * :not (one argument, which is an object returned from a recursive
|
|
||||||
# call to #execute with the same block.
|
|
||||||
def execute(&block)
|
|
||||||
case @op
|
|
||||||
when :eq
|
|
||||||
if @right == "*"
|
|
||||||
yield :present, @left
|
|
||||||
elsif @right.index '*'
|
|
||||||
yield :substrings, @left, @right
|
|
||||||
else
|
|
||||||
yield :equalityMatch, @left, @right
|
|
||||||
end
|
|
||||||
when :ge
|
|
||||||
yield :greaterOrEqual, @left, @right
|
|
||||||
when :le
|
|
||||||
yield :lessOrEqual, @left, @right
|
|
||||||
when :or, :and
|
|
||||||
yield @op, (@left.execute(&block)), (@right.execute(&block))
|
|
||||||
when :not
|
|
||||||
yield @op, (@left.execute(&block))
|
|
||||||
end || []
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# This is a private helper method for dealing with chains of ANDs and ORs
|
|
||||||
# that are longer than two. If BOTH of our branches are of the specified
|
|
||||||
# type of joining operator, then return both of them as an array (calling
|
|
||||||
# coalesce recursively). If they're not, then return an array consisting
|
|
||||||
# only of self.
|
|
||||||
def coalesce(operator) #:nodoc:
|
|
||||||
if @op == operator
|
|
||||||
[@left.coalesce(operator), @right.coalesce(operator)]
|
|
||||||
else
|
|
||||||
[self]
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
#--
|
|
||||||
# We got a hash of attribute values.
|
|
||||||
# Do we match the attributes?
|
|
||||||
# Return T/F, and call match recursively as necessary.
|
|
||||||
#++
|
|
||||||
def match(entry)
|
|
||||||
case @op
|
|
||||||
when :eq
|
|
||||||
if @right == "*"
|
|
||||||
l = entry[@left] and l.length > 0
|
|
||||||
else
|
|
||||||
l = entry[@left] and l = Array(l) and l.index(@right)
|
|
||||||
end
|
|
||||||
else
|
|
||||||
raise Net::LDAP::LdapError, "Unknown filter type in match: #{@op}"
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Converts escaped characters (e.g., "\\28") to unescaped characters
|
|
||||||
# ("(").
|
|
||||||
def unescape(right)
|
|
||||||
right.gsub(/\\([a-fA-F\d]{2})/) { [$1.hex].pack("U") }
|
|
||||||
end
|
|
||||||
private :unescape
|
|
||||||
|
|
||||||
##
|
|
||||||
# Parses RFC 2254-style string representations of LDAP filters into Filter
|
|
||||||
# object hierarchies.
|
|
||||||
class FilterParser #:nodoc:
|
|
||||||
##
|
|
||||||
# The constructed filter.
|
|
||||||
attr_reader :filter
|
|
||||||
|
|
||||||
class << self
|
|
||||||
private :new
|
|
||||||
|
|
||||||
##
|
|
||||||
# Construct a filter tree from the provided string and return it.
|
|
||||||
def parse(ldap_filter_string)
|
|
||||||
new(ldap_filter_string).filter
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
def initialize(str)
|
|
||||||
require 'strscan' # Don't load strscan until we need it.
|
|
||||||
@filter = parse(StringScanner.new(str))
|
|
||||||
raise Net::LDAP::LdapError, "Invalid filter syntax." unless @filter
|
|
||||||
end
|
|
||||||
|
|
||||||
##
|
|
||||||
# Parse the string contained in the StringScanner provided. Parsing
|
|
||||||
# tries to parse a standalone expression first. If that fails, it tries
|
|
||||||
# to parse a parenthesized expression.
|
|
||||||
def parse(scanner)
|
|
||||||
parse_filter_branch(scanner) or parse_paren_expression(scanner)
|
|
||||||
end
|
|
||||||
private :parse
|
|
||||||
|
|
||||||
##
|
|
||||||
# Join ("&") and intersect ("|") operations are presented in branches.
|
|
||||||
# That is, the expression <tt>(&(test1)(test2)</tt> has two branches:
|
|
||||||
# test1 and test2. Each of these is parsed separately and then pushed
|
|
||||||
# into a branch array for filter merging using the parent operation.
|
|
||||||
#
|
|
||||||
# This method parses the branch text out into an array of filter
|
|
||||||
# objects.
|
|
||||||
def parse_branches(scanner)
|
|
||||||
branches = []
|
|
||||||
while branch = parse_paren_expression(scanner)
|
|
||||||
branches << branch
|
|
||||||
end
|
|
||||||
branches
|
|
||||||
end
|
|
||||||
private :parse_branches
|
|
||||||
|
|
||||||
##
|
|
||||||
# Join ("&") and intersect ("|") operations are presented in branches.
|
|
||||||
# That is, the expression <tt>(&(test1)(test2)</tt> has two branches:
|
|
||||||
# test1 and test2. Each of these is parsed separately and then pushed
|
|
||||||
# into a branch array for filter merging using the parent operation.
|
|
||||||
#
|
|
||||||
# This method calls #parse_branches to generate the branch list and then
|
|
||||||
# merges them into a single Filter tree by calling the provided
|
|
||||||
# operation.
|
|
||||||
def merge_branches(op, scanner)
|
|
||||||
filter = nil
|
|
||||||
branches = parse_branches(scanner)
|
|
||||||
|
|
||||||
if branches.size >= 1
|
|
||||||
filter = branches.shift
|
|
||||||
while not branches.empty?
|
|
||||||
filter = filter.__send__(op, branches.shift)
|
|
||||||
end
|
|
||||||
end
|
|
||||||
|
|
||||||
filter
|
|
||||||
end
|
|
||||||
private :merge_branches
|
|
||||||
|
|
||||||
def parse_paren_expression(scanner)
|
|
||||||
if scanner.scan(/\s*\(\s*/)
|
|
||||||
expr = if scanner.scan(/\s*\&\s*/)
|
|
||||||
merge_branches(:&, scanner)
|
|
||||||
elsif scanner.scan(/\s*\|\s*/)
|
|
||||||
merge_branches(:|, scanner)
|
|
||||||
elsif scanner.scan(/\s*\!\s*/)
|
|
||||||
br = parse_paren_expression(scanner)
|
|
||||||
~br if br
|
|
||||||
else
|
|
||||||
parse_filter_branch(scanner)
|
|
||||||
end
|
|
||||||
|
|
||||||
if expr and scanner.scan(/\s*\)\s*/)
|
|
||||||
expr
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
private :parse_paren_expression
|
|
||||||
|
|
||||||
##
|
|
||||||
# This parses a given expression inside of parentheses.
|
|
||||||
def parse_filter_branch(scanner)
|
|
||||||
scanner.scan(/\s*/)
|
|
||||||
if token = scanner.scan(/[-\w:.]*[\w]/)
|
|
||||||
scanner.scan(/\s*/)
|
|
||||||
if op = scanner.scan(/<=|>=|!=|:=|=/)
|
|
||||||
scanner.scan(/\s*/)
|
|
||||||
if value = scanner.scan(/(?:[-\w*.+@=,#\$%&!'\s]|\\[a-fA-F\d]{2})+/)
|
|
||||||
# 20100313 AZ: Assumes that "(uid=george*)" is the same as
|
|
||||||
# "(uid=george* )". The standard doesn't specify, but I can find
|
|
||||||
# no examples that suggest otherwise.
|
|
||||||
value.strip!
|
|
||||||
case op
|
|
||||||
when "="
|
|
||||||
Net::LDAP::Filter.eq(token, value)
|
|
||||||
when "!="
|
|
||||||
Net::LDAP::Filter.ne(token, value)
|
|
||||||
when "<="
|
|
||||||
Net::LDAP::Filter.le(token, value)
|
|
||||||
when ">="
|
|
||||||
Net::LDAP::Filter.ge(token, value)
|
|
||||||
when ":="
|
|
||||||
Net::LDAP::Filter.ex(token, value)
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
end
|
|
||||||
private :parse_filter_branch
|
|
||||||
end # class Net::LDAP::FilterParser
|
|
||||||
end # class Net::LDAP::Filter
|
|
|
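A short sketch of composing and rendering filters with the class above, assuming the vendored net-ldap is loaded; attribute names and values are illustrative only.

    f1 = Net::LDAP::Filter.eq('cn', 'George*')
    f2 = Net::LDAP::Filter.present('mail')

    (f1 & f2).to_rfc2254   # => "(&(cn=George*)(mail=*))"
    (~f1).to_rfc2254       # => "(!(cn=George*))"

    parsed = Net::LDAP::Filter.construct('(|(uid=george)(uid=bob))')
    parsed.to_s            # => "(|(uid=george)(uid=bob))"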
@ -1,31 +0,0 @@
# -*- ruby encoding: utf-8 -*-
require 'digest/sha1'
require 'digest/md5'

class Net::LDAP::Password
  class << self
    # Generate a password-hash suitable for inclusion in an LDAP attribute.
    # Pass a hash type (currently supported: :md5 and :sha) and a plaintext
    # password. This function will return a hashed representation.
    #
    #--
    # STUB: This is here to fulfill the requirements of an RFC, which
    # one?
    #
    # TODO, gotta do salted-sha and (maybe) salted-md5. Should we provide
    # sha1 as a synonym for sha? I vote no because then should you also
    # provide ssha1 for symmetry?
    def generate(type, str)
      digest, digest_name = case type
                            when :md5
                              [Digest::MD5.new, 'MD5']
                            when :sha
                              [Digest::SHA1.new, 'SHA']
                            else
                              raise Net::LDAP::LdapError, "Unsupported password-hash type (#{type})"
                            end
      digest << str.to_s
      return "{#{digest_name}}#{[digest.digest].pack('m').chomp}"
    end
  end
end
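And a brief usage sketch for the password helper above, assuming the vendored net-ldap is loaded; the plaintext is made up and the digests are elided.

    Net::LDAP::Password.generate(:sha, 'secret')
    # => "{SHA}..."   ("{SHA}" followed by the Base64 of the SHA1 digest)

    Net::LDAP::Password.generate(:md5, 'secret')
    # => "{MD5}..."   (same idea, using MD5)

    Net::LDAP::Password.generate(:crypt, 'secret')
    # raises Net::LDAP::LdapError (unsupported hash type)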