39 Commits

Author SHA1 Message Date
Evgeniy A. Dushistov
49c8094b53 version 0.5.5 2023-04-18 21:47:55 +03:00
Evgeniy A. Dushistov
4346e65bd3 fix CI build: ubuntu-18.04 not supported by github actions anymore 2023-04-18 21:44:18 +03:00
Evgeniy A. Dushistov
d144e0310c fix CI build 2023-01-16 16:44:09 +03:00
NiLuJe
6e36e7730c Warn on unknown dicts 2022-09-16 18:48:08 +03:00
NiLuJe
abe5e9e72f Check accesses to the bookname_to_ifo std::map
Avoid crashes when passing unknown dicts to the -u flag

Fix #87
2022-09-16 18:48:08 +03:00
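A tiny sketch of the checked-access pattern applied in abe5e9e72f (it mirrors the main.cpp hunk further below, with illustrative names): std::map::at() throws std::out_of_range on an unknown key, so the lookup is done with find() and a miss is reported instead of crashing.

#include <cstdio>
#include <map>
#include <string>
#include <vector>

// Illustrative only: report an unknown dictionary name instead of letting
// std::map::at() throw and terminate the program.
static void add_dict(const std::map<std::string, std::string> &bookname_to_ifo,
                     const std::string &name,
                     std::vector<std::string> &order_list)
{
    const auto it = bookname_to_ifo.find(name);
    if (it != bookname_to_ifo.end())
        order_list.push_back(it->second);
    else
        std::fprintf(stderr, "Unknown dictionary: %s\n", name.c_str());
}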
NiLuJe
488ec68854 Use off_t for stuff mainly assigned to a stat.st_size value
This allows simplifying the mmap sanity checks in mapfile, and ensures
they won't break when -D_FILE_OFFSET_BITS=64 is defined
2022-09-14 22:12:29 +03:00
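A minimal sketch of the point behind 488ec68854 (not sdcv's actual code): keep sizes in off_t, the type stat() itself reports, so nothing is narrowed when -D_FILE_OFFSET_BITS=64 makes off_t 64-bit on 32-bit platforms while unsigned long stays 32-bit.

#include <sys/stat.h>

// Hypothetical helper for illustration only: return a file's size as off_t,
// so large files survive -D_FILE_OFFSET_BITS=64 builds without truncation.
static bool file_size(const char *path, off_t &size)
{
    struct stat st;
    if (stat(path, &st) != 0)
        return false;
    size = st.st_size; // off_t to off_t, no narrowing conversion
    return true;
}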
Marcelino Alberdi Pereira
b698445ead Add a small summary of the project to the README 2022-09-07 17:51:13 +03:00
Evgeniy A. Dushistov
504e7807e6 add information about 0.5.4 into NEWS 2022-06-24 21:49:00 +03:00
Evgeniy A. Dushistov
6c80bf2d99 t_json: add data about new dictionary 2022-06-24 21:34:47 +03:00
Evgeniy A. Dushistov
8742575c33 fix bash syntax error 2022-06-24 21:34:47 +03:00
Evgeniy A. Dushistov
b294b76fb5 check file size before mapping on linux 2022-06-24 21:34:47 +03:00
Evgeniy A. Dushistov
823ec3d840 clang-format for mapfile 2022-06-24 21:34:47 +03:00
Evgeniy A. Dushistov
6ab8b51e6c version 0.5.4 2022-06-24 21:34:47 +03:00
Evgeniy A. Dushistov
881657b336 Revert "replace deprecated g_pattern_match_string function"
This reverts commit 452a4e07fb.
2022-06-24 21:34:47 +03:00
Evgeniy A. Dushistov
911fc2f561 more robust parsing of ifo file
fixes #79 fixes #81
2022-06-24 21:34:47 +03:00
Evgeniy A. Dushistov
f488f5350b stardict_lib.hpp: remove unused headers plus clang-format 2022-06-24 21:34:47 +03:00
Evgeniy A. Dushistov
e72220e748 use cmake to check if compiler supports c++11 2022-06-24 21:34:47 +03:00
Evgeniy A. Dushistov
b77c0e793a replace deprecated g_pattern_match_string function 2022-06-24 21:34:47 +03:00
Evgeniy A. Dushistov
ebaa6f2136 clang-format for stardict_lib.cpp 2022-06-24 21:34:47 +03:00
Aleksa Sarai
d054adb37c tests: add multiple results integration test
Make sure we return all of the relevant results, even in cases with
many results (more than ENTR_PER_PAGE in the offset index) and
where a synonym and a headword are present for the same word.

Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
2021-11-14 22:38:26 +03:00
Aleksa Sarai
4a9b1dae3d stardict_lib: remove dead poGet{Current,Next,Pre}Word iterators
They aren't used at all by sdcv, and thus aren't tested. This makes
adapting the core lookup algorithms complicated: these methods use them,
but since the methods themselves are untested there is no real way of
knowing whether a change has broken them or not.

Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
2021-11-14 22:38:26 +03:00
Aleksa Sarai
6d385221d0 lookup: return all matching entries found during lookup
Previously, we would just return the first entry we found that matched
the requested word. This causes issues with dictionaries that have lots
of entries which can be found using the same search string. In these
cases, the user got a completely arbitrary word returned to them rather
than the full set.

While this may seem strange, this is incredibly commonplace in Japanese
and likely several other languages. In Japanese:

 * When written using kanji, the same string of characters could refer
   to more than one word which may have a completely different meaning.
   Examples include 潜る (くぐる、もぐる) and 辛い (からい、つらい).

 * When written in kana, the same string of characters can also refer to
   more than one word which is written using completely different kanji,
   and has a completely different meaning. Examples include きく
   (聞く、効く、菊) and たつ (立つ、建つ、絶つ).

In both cases, these are different words in every sense of the word, and
have separate headwords for each in the dictionary. Thus in order to be
completely useful for such dictionaries, sdcv needs to be able to return
every matching word in the dictionary.

The solution is conceptually simple -- return a set containing the
indices rather than just a single index. Since every list we search is
sorted (to allow binary searching), once we find one match we can just
walk backwards and forwards from the match point to find the entire
block of matching terms and add them to the set in linear time. A
std::set is used so that we don't return duplicate results needlessly.

In practice the solution was a bit more complicated, because the .oft cache
files require some extra fiddling and the ->lookup methods are also used by
some callers to find the next entry when no entry was found. But on the
whole it's not too drastic a change from the previous setup.

Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
2021-11-14 22:38:26 +03:00
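The core of 6d385221d0 can be shown with a small standalone sketch (a simplification, not the sdcv implementation, which works over paged .idx/.oft indices): binary-search the sorted list for any match, then walk linearly backwards and forwards to collect the whole block of equal keys into a std::set.

#include <set>
#include <string>
#include <vector>

// Collect the indices of every entry equal to `word` in a sorted word list.
// Returns true if at least one match was found.
static bool lookup_all(const std::vector<std::string> &words,
                       const std::string &word, std::set<long> &idxs)
{
    long lo = 0, hi = static_cast<long>(words.size()) - 1, hit = -1;
    while (lo <= hi) { // binary search for *any* matching index
        const long mid = (lo + hi) / 2;
        const int cmp = word.compare(words[mid]);
        if (cmp > 0)
            lo = mid + 1;
        else if (cmp < 0)
            hi = mid - 1;
        else {
            hit = mid;
            break;
        }
    }
    if (hit < 0)
        return false;
    // The list is sorted, so equal entries form one contiguous block:
    // walk linearly in both directions from the hit.
    for (long i = hit; i >= 0 && words[i] == word; --i)
        idxs.insert(i);
    for (long i = hit + 1; i < static_cast<long>(words.size()) && words[i] == word; ++i)
        idxs.insert(i);
    return true;
}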
Evgeniy Dushistov
3d15ce3b07 Merge pull request #77 from cyphar/multi-word-lookups
lookup: do not bail on first failed lookup with a word list
2021-10-17 21:03:14 +03:00
Aleksa Sarai
51338ac5bb lookup: do not bail on first failed lookup with a word list
Due to the lack of deinflection support in StarDict, users might want to
be able to create a list of possible deinflections and search each one
to see if there is a dictionary entry for that deinflection.

Being able to do this in one sdcv invocation is far more preferable to
calling sdcv once for each candidate due to the performance cost of
doing so. The most obvious language that would benefit from this is
Japanese, but I'm sure other folks would prefer this.

To better support this use case, try to look up every word in the provided
list of words before exiting with an error if any one of the words failed
to be looked up.

Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
2021-09-29 03:28:44 +10:00
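A hedged sketch of the control flow introduced by 51338ac5bb (it mirrors the main.cpp hunk further below; lookup() here is a dummy stand-in for process_phrase()): every word in the list is looked up, the first failure is remembered, and it is only returned once the whole list has been processed.

#include <cstring>

enum search_result { SEARCH_SUCCESS = 0, SEARCH_FAILURE = 1 };

// Dummy stand-in for sdcv's process_phrase(): pretend only "foo" can be found.
static search_result lookup(const char *word)
{
    return std::strcmp(word, "foo") == 0 ? SEARCH_SUCCESS : SEARCH_FAILURE;
}

// Look up every word; remember the first failure but keep going, so the user
// gets results (or failures) for the whole list in a single invocation.
static search_result lookup_word_list(const char *const *words, int count)
{
    search_result rval = SEARCH_SUCCESS;
    for (int i = 0; i < count; ++i) {
        const search_result this_rval = lookup(words[i]);
        if (rval == SEARCH_SUCCESS)
            rval = this_rval;
    }
    return rval;
}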
Evgeniy Dushistov
5ada75e08d Merge pull request #73 from 258204/json
Added --json (same as --json-output) to match man
2021-06-21 12:45:09 +03:00
258204
c7d9944f7d Added --json (same as --json-output) to match man 2021-06-19 19:19:31 -06:00
Evgeniy Dushistov
3963e358cd Merge pull request #68 from NiLuJe/glib-getopt
Handle "rest" arguments the glib way
2021-01-27 16:33:36 +03:00
NiLuJe
3b26731b02 Making glib think it's a filename instead of a string prevents the
initial UTF-8 conversion

At least on POSIX.

Windows is another kettle of fish. But then it was probably already
broken there.
2021-01-14 19:26:06 +01:00
NiLuJe
070a9fb0bd Oh, well, dirty hackery it is, then.
The previous approach only works as long as locales are actually sane
(i.e., the test only passes if you *actually* have the ru_RU.KOI8-R
locale built, which the CI doesn't).
2021-01-12 04:37:07 +01:00
NiLuJe
8f096629ec Unbreak tests
glib already runs the argument through g_locale_to_utf8 with
G_OPTION_REMAINING
2021-01-12 04:16:03 +01:00
NiLuJe
25768c6b80 Handle "rest" arguments the glib way
Ensures the "stop parsing" token (--) is handled properly.
2021-01-12 03:35:55 +01:00
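A stripped-down sketch of the glib pattern adopted in 25768c6b80 (it mirrors the main.cpp hunk further below, with simplified names): a G_OPTION_REMAINING entry with G_OPTION_ARG_FILENAME_ARRAY lets g_option_context_parse() collect the trailing words itself, which also makes the "--" stop-parsing token work as expected.

#include <glib.h>

int main(int argc, char **argv)
{
    gchar **words = nullptr; // glib fills this with everything after the options
    const GOptionEntry entries[] = {
        { G_OPTION_REMAINING, 0, 0, G_OPTION_ARG_FILENAME_ARRAY, &words,
          "search terms", "words" },
        {},
    };
    GError *error = nullptr;
    GOptionContext *context = g_option_context_new(nullptr);
    g_option_context_add_main_entries(context, entries, nullptr);
    if (!g_option_context_parse(context, &argc, &argv, &error)) {
        g_printerr("option parsing failed: %s\n", error->message);
        return 1;
    }
    for (gchar **p = words; p != nullptr && *p != nullptr; ++p)
        g_print("word: %s\n", *p);
    g_option_context_free(context);
    return 0;
}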
Evgeniy Dushistov
4ae4207349 Merge pull request #67 from doozan/master
Use binary search for synonyms, fixes #31
2020-12-23 04:30:13 +03:00
Jeff Doozan
994c1c7ae6 Use mapfile directly instead of buffer 2020-12-21 17:10:37 -05:00
Jeff Doozan
d38f8f13c9 Synonyms: Use MapFile 2020-12-21 08:53:29 -05:00
Jeff Doozan
cc7bcb8b73 Fix crash if dictionary has no synonyms 2020-12-19 18:37:15 -05:00
Jeff Doozan
8e9f72ae57 Synonyms lookup: return correct offset 2020-12-19 18:01:21 -05:00
Jeff Doozan
88af1a077c Use binary search for synonyms, fixes #31 2020-12-19 15:10:39 -05:00
Evgeniy Dushistov
b66799f358 Merge pull request #66 from Dushistov/fix-ci
fix ci: github changed API for path/env
2020-12-10 00:42:34 +03:00
Evgeniy A. Dushistov
be5c3a35bf fix ci: github changed API for path/env 2020-12-10 00:40:14 +03:00
23 changed files with 502 additions and 414 deletions

View File

@@ -15,7 +15,7 @@ BreakBeforeBinaryOperators: true
BreakBeforeTernaryOperators: true
BreakConstructorInitializersBeforeComma: true
BinPackParameters: true
ColumnLimit: 0
ColumnLimit: 120
ConstructorInitializerAllOnOneLineOrOnePerLine: false
DerivePointerAlignment: false
ExperimentalAutoDetectBinPacking: false

View File

@@ -20,12 +20,13 @@ jobs:
fail-fast: true
matrix:
os: [ubuntu-latest]
os: [ubuntu-20.04, ubuntu-latest]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
submodules: 'recursive'
- uses: jwlawson/actions-setup-cmake@v1.0
- uses: jwlawson/actions-setup-cmake@v1.4
if: matrix.os != 'ubuntu-latest'
with:
cmake-version: '3.5.1'
github-api-token: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -3,6 +3,10 @@ project(sdcv)
cmake_minimum_required(VERSION 3.5 FATAL_ERROR)
cmake_policy(VERSION 3.5)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED True)
set(CMAKE_CXX_EXTENSIONS False)
include("${CMAKE_CURRENT_SOURCE_DIR}/cmake/compiler.cmake")
set(ZLIB_FIND_REQUIRED True)
@@ -91,7 +95,7 @@ set(CPACK_PACKAGE_VENDOR "Evgeniy Dushistov <dushistov@mail.ru>")
set(CPACK_PACKAGE_DESCRIPTION_FILE "${CMAKE_CURRENT_SOURCE_DIR}/README.org")
set(CPACK_PACKAGE_VERSION_MAJOR "0")
set(CPACK_PACKAGE_VERSION_MINOR "5")
set(CPACK_PACKAGE_VERSION_PATCH "3")
set(CPACK_PACKAGE_VERSION_PATCH "5")
set(sdcv_VERSION
"${CPACK_PACKAGE_VERSION_MAJOR}.${CPACK_PACKAGE_VERSION_MINOR}.${CPACK_PACKAGE_VERSION_PATCH}")
@@ -143,5 +147,7 @@ if (BUILD_TESTS)
add_sdcv_shell_test(t_utf8input)
add_sdcv_shell_test(t_datadir)
add_sdcv_shell_test(t_return_code)
add_sdcv_shell_test(t_multiple_results)
add_sdcv_shell_test(t_newlines_in_ifo)
endif (BUILD_TESTS)

NEWS
View File

@@ -1,3 +1,13 @@
Version 0.5.5
- Avoid crashes when passing unknown dicts to the -u flag (by NiLuJe)
- Use off_t for stuff mainly assigned to a stat.st_size value
Version 0.5.4
- Use binary search for synonyms
- Various improvements in work with synonyms
- Added --json (same as --json-output) to match man
- Show all matched results
- More robust parsing of ifo file
- Prevent crash if the file size of .oft files does not match the expected one
Version 0.5.3
- Use single quotes around JSON data to reduce need for escaping
- Store integer magic in cache file

View File

@@ -1,6 +1,9 @@
#+OPTIONS: ^:nil
[[https://github.com/Dushistov/sdcv/actions?query=workflow%3ACI+branch%3Amaster][https://github.com/Dushistov/sdcv/workflows/CI/badge.svg]]
[[https://github.com/Dushistov/sdcv/blob/master/LICENSE][https://img.shields.io/badge/license-GPL%202-brightgreen.svg]]
* sdcv
*sdcv* is a simple, cross-platform, text-based utility for working with dictionaries in [[http://stardict-4.sourceforge.net/][StarDict]] format.
* How to compile and install
#+BEGIN_SRC sh
mkdir /tmp/build-sdcv

View File

@@ -16,19 +16,6 @@ if (NOT DEFINED SDCV_COMPILER_IS_GCC_COMPATIBLE)
endif()
endif()
if (MSVC AND (MSVC_VERSION LESS 1900))
message(FATAL_ERROR "MSVC version ${MSVC_VERSION} have no full c++11 support")
elseif (MSVC)
add_definitions(-DNOMINMAX)
elseif (NOT MSVC)
check_cxx_compiler_flag("-std=c++11" CXX_SUPPORTS_CXX11)
if (CXX_SUPPORTS_CXX11)
append("-std=c++11" CMAKE_CXX_FLAGS)
else ()
message(FATAL_ERROR "sdcv requires C++11 support but the '-std=c++11' flag isn't supported.")
endif()
endif ()
if (SDCV_COMPILER_IS_GCC_COMPATIBLE)
append("-Wall" "-Wextra" "-Wformat-security" "-Wcast-align" "-Werror=format" "-Wcast-qual" CMAKE_C_FLAGS)
append("-Wall" "-pedantic" "-Wextra" "-Wformat-security" "-Wcast-align" "-Werror=format" "-Wcast-qual" CMAKE_CXX_FLAGS)

View File

@@ -27,7 +27,7 @@ public:
private:
const char *start; /* start of mmap'd area */
const char *end; /* end of mmap'd area */
unsigned long size; /* size of mmap */
off_t size; /* size of mmap */
int type;
z_stream zStream;
@@ -47,7 +47,7 @@ private:
std::string origFilename;
std::string comment;
unsigned long crc;
unsigned long length;
off_t length;
unsigned long compressedLength;
DictCache cache[DICT_CACHE_SIZE];
MapFile mapfile;

View File

@@ -199,14 +199,18 @@ static std::string parse_data(const gchar *data, bool colorize_output)
void Library::SimpleLookup(const std::string &str, TSearchResultList &res_list)
{
glong ind;
std::set<glong> wordIdxs;
res_list.reserve(ndicts());
for (gint idict = 0; idict < ndicts(); ++idict)
if (SimpleLookupWord(str.c_str(), ind, idict))
for (gint idict = 0; idict < ndicts(); ++idict) {
wordIdxs.clear();
if (SimpleLookupWord(str.c_str(), wordIdxs, idict))
for (auto &wordIdx : wordIdxs)
res_list.push_back(
TSearchResult(dict_name(idict),
poGetWord(ind, idict),
parse_data(poGetWordData(ind, idict), colorize_output_)));
poGetWord(wordIdx, idict),
parse_data(poGetWordData(wordIdx, idict),
colorize_output_)));
}
}
void Library::LookupWithFuzzy(const std::string &str, TSearchResultList &res_list)

View File

@@ -7,6 +7,7 @@
#ifdef HAVE_MMAP
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#endif
#ifdef _WIN32
@@ -21,13 +22,13 @@ public:
~MapFile();
MapFile(const MapFile &) = delete;
MapFile &operator=(const MapFile &) = delete;
bool open(const char *file_name, unsigned long file_size);
bool open(const char *file_name, off_t file_size);
gchar *begin() { return data; }
private:
char *data = nullptr;
unsigned long size = 0ul;
#ifdef HAVE_MMAP
size_t size = 0u;
int mmap_fd = -1;
#elif defined(_WIN32)
HANDLE hFile = 0;
@@ -35,25 +36,31 @@ private:
#endif
};
inline bool MapFile::open(const char *file_name, unsigned long file_size)
inline bool MapFile::open(const char *file_name, off_t file_size)
{
size = file_size;
#ifdef HAVE_MMAP
if ((mmap_fd = ::open(file_name, O_RDONLY)) < 0) {
// g_print("Open file %s failed!\n",fullfilename);
return false;
}
data = (gchar *)mmap(nullptr, file_size, PROT_READ, MAP_SHARED, mmap_fd, 0);
struct stat st;
if (fstat(mmap_fd, &st) == -1 || st.st_size < 0 || (st.st_size == 0 && S_ISREG(st.st_mode))
|| st.st_size != file_size) {
close(mmap_fd);
return false;
}
size = static_cast<size_t>(st.st_size);
data = (gchar *)mmap(nullptr, size, PROT_READ, MAP_SHARED, mmap_fd, 0);
if ((void *)data == (void *)(-1)) {
// g_print("mmap file %s failed!\n",idxfilename);
size = 0u;
data = nullptr;
return false;
}
#elif defined(_WIN32)
hFile = CreateFile(file_name, GENERIC_READ, 0, nullptr, OPEN_ALWAYS,
FILE_ATTRIBUTE_NORMAL, 0);
hFileMap = CreateFileMapping(hFile, nullptr, PAGE_READONLY, 0,
file_size, nullptr);
hFile = CreateFile(file_name, GENERIC_READ, 0, nullptr, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, 0);
hFileMap = CreateFileMapping(hFile, nullptr, PAGE_READONLY, 0, file_size, nullptr);
data = (gchar *)MapViewOfFile(hFileMap, FILE_MAP_READ, 0, 0, file_size);
#else
gsize read_len;

View File

@@ -83,6 +83,7 @@ try {
glib::CharStr opt_data_dir;
gboolean only_data_dir = FALSE;
gboolean colorize = FALSE;
glib::StrArr word_list;
const GOptionEntry entries[] = {
{ "version", 'v', 0, G_OPTION_ARG_NONE, &show_version,
@@ -96,6 +97,8 @@ try {
_("for use in scripts"), nullptr },
{ "json-output", 'j', 0, G_OPTION_ARG_NONE, &json_output,
_("print the result formatted as JSON"), nullptr },
{ "json", 'j', 0, G_OPTION_ARG_NONE, &json_output,
_("print the result formatted as JSON"), nullptr },
{ "exact-search", 'e', 0, G_OPTION_ARG_NONE, &no_fuzzy,
_("do not fuzzy-search for similar words, only return exact matches"), nullptr },
{ "utf8-output", '0', 0, G_OPTION_ARG_NONE, &utf8_output,
@@ -109,11 +112,13 @@ try {
_("only use the dictionaries in data-dir, do not search in user and system directories"), nullptr },
{ "color", 'c', 0, G_OPTION_ARG_NONE, &colorize,
_("colorize the output"), nullptr },
{ G_OPTION_REMAINING, 0, 0, G_OPTION_ARG_FILENAME_ARRAY, get_addr(word_list),
_("search terms"), _(" words") },
{},
};
glib::Error error;
GOptionContext *context = g_option_context_new(_(" words"));
GOptionContext *context = g_option_context_new(nullptr);
g_option_context_set_help_enabled(context, TRUE);
g_option_context_add_main_entries(context, entries, nullptr);
const gboolean parse_res = g_option_context_parse(context, &argc, &argv, get_addr(error));
@@ -181,10 +186,13 @@ try {
}
// add bookname to list
gchar **p = get_impl(use_dict_list);
while (*p) {
order_list.push_back(bookname_to_ifo.at(*p));
++p;
for (gchar **p = get_impl(use_dict_list); *p != nullptr; ++p) {
auto it = bookname_to_ifo.find(*p);
if (it != bookname_to_ifo.end()) {
order_list.push_back(it->second);
} else {
fprintf(stderr, _("Unknown dictionary: %s\n"), *p);
}
}
} else {
std::string ordering_cfg_file = std::string(g_get_user_config_dir()) + G_DIR_SEPARATOR_S "sdcv_ordering";
@@ -196,7 +204,12 @@ try {
if (ordering_file != nullptr) {
std::string line;
while (stdio_getline(ordering_file, line)) {
order_list.push_back(bookname_to_ifo.at(line));
auto it = bookname_to_ifo.find(line);
if (it != bookname_to_ifo.end()) {
order_list.push_back(it->second);
} else {
fprintf(stderr, _("Unknown dictionary: %s\n"), line.c_str());
}
}
fclose(ordering_file);
}
@@ -210,14 +223,19 @@ try {
lib.load(dicts_dir_list, order_list, disable_list);
std::unique_ptr<IReadLine> io(create_readline_object());
if (optind < argc) {
if (word_list != nullptr) {
search_result rval = SEARCH_SUCCESS;
for (int i = optind; i < argc; ++i)
if ((rval = lib.process_phrase(argv[i], *io, non_interactive)) != SEARCH_SUCCESS) {
return rval;
gchar **p = get_impl(word_list);
while (*p) {
search_result this_rval = lib.process_phrase(*p++, *io, non_interactive);
// If we encounter any error, save it but continue through the word
// list to check all requested words.
if (rval == SEARCH_SUCCESS)
rval = this_rval;
}
if (rval != SEARCH_SUCCESS)
return rval;
} else if (!non_interactive) {
std::string phrase;
while (io->read(_("Enter word or phrase: "), phrase)) {
if (lib.process_phrase(phrase.c_str(), *io) == SEARCH_FAILURE)

View File

@@ -5,6 +5,7 @@
#include <algorithm>
#include <cctype>
#include <cstring>
#include <map>
#include <stdexcept>
#include <glib/gstdio.h>
@@ -78,108 +79,93 @@ bool DictInfo::load_from_ifo_file(const std::string &ifofilename,
{
ifo_file_name = ifofilename;
glib::CharStr buffer;
if (!g_file_get_contents(ifofilename.c_str(), get_addr(buffer), nullptr, nullptr))
gsize length = 0;
if (!g_file_get_contents(ifofilename.c_str(), get_addr(buffer), &length, nullptr)) {
fprintf(stderr, "Can not read from %s\n", ifofilename.c_str());
return false;
}
static const char TREEDICT_MAGIC_DATA[] = "StarDict's treedict ifo file";
static const char DICT_MAGIC_DATA[] = "StarDict's dict ifo file";
const gchar *magic_data = istreedict ? TREEDICT_MAGIC_DATA : DICT_MAGIC_DATA;
static const unsigned char utf8_bom[] = { 0xEF, 0xBB, 0xBF, '\0' };
if (!g_str_has_prefix(
g_str_has_prefix(get_impl(buffer), (const gchar *)(utf8_bom)) ? get_impl(buffer) + 3 : get_impl(buffer),
magic_data)) {
static const gchar utf8_bom[] = { (gchar)0xEF, (gchar)0xBB, (gchar)0xBF, '\0' };
const gchar *p = get_impl(buffer);
const gchar *end = p + length;
if (g_str_has_prefix(p, utf8_bom)) {
p += strlen(utf8_bom);
}
if (!g_str_has_prefix(p, magic_data)) {
fprintf(stderr, "No magic header(%s) in ifo file\n", magic_data);
return false;
}
p += strlen(magic_data);
gchar *p1 = get_impl(buffer) + strlen(magic_data) - 1;
gchar *p2 = strstr(p1, "\nwordcount=");
if (p2 == nullptr)
std::map<std::string, std::string> key_value_map;
while (p != end) {
auto key_it = std::find_if(p, end, [](gchar ch) { return !g_ascii_isspace(ch); });
if (key_it == end) {
break;
}
auto eq_it = std::find(key_it, end, gchar('='));
if (eq_it == end) {
fprintf(stderr, "Invalid part of ifo (no '=') here: %s\n", key_it);
return false;
}
auto val_it = std::find_if(eq_it + 1, end, [](gchar ch) { return !g_ascii_isspace(ch); });
if (val_it == end) {
key_value_map.insert(std::make_pair(std::string(key_it, eq_it), std::string()));
break;
}
gchar *p3 = strchr(p2 + sizeof("\nwordcount=") - 1, '\n');
auto line_end_it = std::find_if(val_it, end, [](gchar ch) { return ch == '\r' || ch == '\n'; });
key_value_map.insert(std::make_pair(std::string(key_it, eq_it), std::string(val_it, line_end_it)));
if (line_end_it == end)
break;
p = line_end_it + 1;
}
wordcount = atol(std::string(p2 + sizeof("\nwordcount=") - 1, p3 - (p2 + sizeof("\nwordcount=") - 1)).c_str());
std::map<std::string, std::string>::const_iterator it;
#define FIND_KEY(_key_) \
it = key_value_map.find(_key_); \
if (it == key_value_map.end()) { \
fprintf(stderr, "Can not find '%s' in ifo file\n", _key_); \
return false; \
}
FIND_KEY("wordcount")
wordcount = atol(it->second.c_str());
if (istreedict) {
p2 = strstr(p1, "\ntdxfilesize=");
if (p2 == nullptr)
return false;
p3 = strchr(p2 + sizeof("\ntdxfilesize=") - 1, '\n');
index_file_size = atol(std::string(p2 + sizeof("\ntdxfilesize=") - 1, p3 - (p2 + sizeof("\ntdxfilesize=") - 1)).c_str());
FIND_KEY("tdxfilesize")
index_file_size = atol(it->second.c_str());
} else {
FIND_KEY("idxfilesize")
index_file_size = atol(it->second.c_str());
}
FIND_KEY("bookname")
bookname = it->second;
p2 = strstr(p1, "\nidxfilesize=");
if (p2 == nullptr)
return false;
p3 = strchr(p2 + sizeof("\nidxfilesize=") - 1, '\n');
index_file_size = atol(std::string(p2 + sizeof("\nidxfilesize=") - 1, p3 - (p2 + sizeof("\nidxfilesize=") - 1)).c_str());
#define SET_IF_EXISTS(_key_) \
it = key_value_map.find(#_key_); \
if (it != key_value_map.end()) { \
_key_ = it->second; \
}
p2 = strstr(p1, "\nbookname=");
if (p2 == nullptr)
return false;
p2 = p2 + sizeof("\nbookname=") - 1;
p3 = strchr(p2, '\n');
bookname.assign(p2, p3 - p2);
p2 = strstr(p1, "\nauthor=");
if (p2) {
p2 = p2 + sizeof("\nauthor=") - 1;
p3 = strchr(p2, '\n');
author.assign(p2, p3 - p2);
}
p2 = strstr(p1, "\nemail=");
if (p2) {
p2 = p2 + sizeof("\nemail=") - 1;
p3 = strchr(p2, '\n');
email.assign(p2, p3 - p2);
}
p2 = strstr(p1, "\nwebsite=");
if (p2) {
p2 = p2 + sizeof("\nwebsite=") - 1;
p3 = strchr(p2, '\n');
website.assign(p2, p3 - p2);
}
p2 = strstr(p1, "\ndate=");
if (p2) {
p2 = p2 + sizeof("\ndate=") - 1;
p3 = strchr(p2, '\n');
date.assign(p2, p3 - p2);
}
p2 = strstr(p1, "\ndescription=");
if (p2) {
p2 = p2 + sizeof("\ndescription=") - 1;
p3 = strchr(p2, '\n');
description.assign(p2, p3 - p2);
}
p2 = strstr(p1, "\nsametypesequence=");
if (p2) {
p2 += sizeof("\nsametypesequence=") - 1;
p3 = strchr(p2, '\n');
sametypesequence.assign(p2, p3 - p2);
}
p2 = strstr(p1, "\nsynwordcount=");
SET_IF_EXISTS(author)
SET_IF_EXISTS(email)
SET_IF_EXISTS(website)
SET_IF_EXISTS(date)
SET_IF_EXISTS(description)
SET_IF_EXISTS(sametypesequence)
syn_wordcount = 0;
if (p2) {
p2 += sizeof("\nsynwordcount=") - 1;
p3 = strchr(p2, '\n');
syn_wordcount = atol(std::string(p2, p3 - p2).c_str());
}
it = key_value_map.find("synwordcount");
if (it != key_value_map.end())
syn_wordcount = atol(it->second.c_str());
#undef FIND_KEY
#undef SET_IF_EXISTS
return true;
}
@@ -443,14 +429,14 @@ public:
if (idxfile)
fclose(idxfile);
}
bool load(const std::string &url, gulong wc, gulong fsize, bool verbose) override;
bool load(const std::string &url, gulong wc, off_t fsize, bool verbose) override;
const gchar *get_key(glong idx) override;
void get_data(glong idx) override { get_key(idx); }
const gchar *get_key_and_data(glong idx) override
{
return get_key(idx);
}
bool lookup(const char *str, glong &idx) override;
bool lookup(const char *str, std::set<glong> &idxs, glong &next_idx) override;
private:
static const gint ENTR_PER_PAGE = 32;
@@ -503,7 +489,7 @@ public:
{
}
~WordListIndex() { g_free(idxdatabuf); }
bool load(const std::string &url, gulong wc, gulong fsize, bool verbose) override;
bool load(const std::string &url, gulong wc, off_t fsize, bool verbose) override;
const gchar *get_key(glong idx) override { return wordlist[idx]; }
void get_data(glong idx) override;
const gchar *get_key_and_data(glong idx) override
@@ -511,7 +497,7 @@ public:
get_data(idx);
return get_key(idx);
}
bool lookup(const char *str, glong &idx) override;
bool lookup(const char *str, std::set<glong> &idxs, glong &next_idx) override;
private:
gchar *idxdatabuf;
@@ -629,7 +615,7 @@ bool OffsetIndex::save_cache(const std::string &url, bool verbose)
return false;
}
bool OffsetIndex::load(const std::string &url, gulong wc, gulong fsize, bool verbose)
bool OffsetIndex::load(const std::string &url, gulong wc, off_t fsize, bool verbose)
{
wordcount = wc;
gulong npages = (wc - 1) / ENTR_PER_PAGE + 2;
@@ -698,47 +684,52 @@ const gchar *OffsetIndex::get_key(glong idx)
return page.entries[idx_in_page].keystr;
}
bool OffsetIndex::lookup(const char *str, glong &idx)
bool OffsetIndex::lookup(const char *str, std::set<glong> &idxs, glong &next_idx)
{
bool bFound = false;
glong iFrom;
glong iTo = wordoffset.size() - 2;
gint cmpint;
glong iThisIndex;
if (stardict_strcmp(str, first.keystr.c_str()) < 0) {
idx = 0;
next_idx = 0;
return false;
} else if (stardict_strcmp(str, real_last.keystr.c_str()) > 0) {
idx = INVALID_INDEX;
next_idx = INVALID_INDEX;
return false;
}
// Search for the first page where the word is likely to be located.
glong iFrom = 0, iTo = wordoffset.size() - 2;
glong iPage = 0, iThisIndex = 0;
while (iFrom <= iTo) {
iThisIndex = (iFrom + iTo) / 2;
glong cmpint = stardict_strcmp(str, get_first_on_page_key(iThisIndex));
if (cmpint > 0)
iFrom = iThisIndex + 1;
else if (cmpint < 0)
iTo = iThisIndex - 1;
else {
bFound = true;
break;
}
}
if (bFound) {
// We can use this found index (even though it might not be the first)
// because we will search backwards later and catch any entries on
// previous pages.
iPage = iThisIndex;
iThisIndex = 0; // first item in the page
} else {
iPage = iTo; // prev
// Not found at the start of a page, so search within the page that
// should contain it. Binary search here is slightly overkill (we're
// searching at most ENTR_PER_PAGE = 32 elements) but this way next_idx
// is treated the same as other Lookup methods.
gulong netr = load_page(iPage);
iFrom = 0;
iThisIndex = 0;
while (iFrom <= iTo) {
iThisIndex = (iFrom + iTo) / 2;
cmpint = stardict_strcmp(str, get_first_on_page_key(iThisIndex));
if (cmpint > 0)
iFrom = iThisIndex + 1;
else if (cmpint < 0)
iTo = iThisIndex - 1;
else {
bFound = true;
break;
}
}
if (!bFound)
idx = iTo; //prev
else
idx = iThisIndex;
}
if (!bFound) {
gulong netr = load_page(idx);
iFrom = 1; // Needn't search the first word anymore.
iTo = netr - 1;
iThisIndex = 0;
while (iFrom <= iTo) {
iThisIndex = (iFrom + iTo) / 2;
cmpint = stardict_strcmp(str, page.entries[iThisIndex].keystr);
glong cmpint = stardict_strcmp(str, page.entries[iThisIndex].keystr);
if (cmpint > 0)
iFrom = iThisIndex + 1;
else if (cmpint < 0)
@@ -748,18 +739,26 @@ bool OffsetIndex::lookup(const char *str, glong &idx)
break;
}
}
idx *= ENTR_PER_PAGE;
}
if (!bFound)
idx += iFrom; //next
else
idx += iThisIndex;
} else {
idx *= ENTR_PER_PAGE;
next_idx = iPage * ENTR_PER_PAGE + iFrom; // next
else {
// Convert the found in-page index to the dict index.
iThisIndex = iPage * ENTR_PER_PAGE + iThisIndex;
// In order to return all idxs that match the search string, walk
// linearly behind and ahead of the found index.
glong iHeadIndex = iThisIndex - 1; // do not include iThisIndex
while (iHeadIndex >= 0 && stardict_strcmp(str, get_key(iHeadIndex)) == 0)
idxs.insert(iHeadIndex--);
do // no need to double-check iThisIndex -- we know it's a match already
idxs.insert(iThisIndex++);
while (iThisIndex <= real_last.idx && stardict_strcmp(str, get_key(iThisIndex)) == 0);
}
return bFound;
}
bool WordListIndex::load(const std::string &url, gulong wc, gulong fsize, bool)
bool WordListIndex::load(const std::string &url, gulong wc, off_t fsize, bool)
{
gzFile in = gzopen(url.c_str(), "rb");
if (in == nullptr)
@@ -772,7 +771,7 @@ bool WordListIndex::load(const std::string &url, gulong wc, gulong fsize, bool)
if (len < 0)
return false;
if (gulong(len) != fsize)
if (static_cast<off_t>(len) != fsize)
return false;
wordlist.resize(wc + 1);
@@ -795,18 +794,18 @@ void WordListIndex::get_data(glong idx)
wordentry_size = g_ntohl(get_uint32(p1));
}
bool WordListIndex::lookup(const char *str, glong &idx)
bool WordListIndex::lookup(const char *str, std::set<glong> &idxs, glong &next_idx)
{
bool bFound = false;
glong iTo = wordlist.size() - 2;
glong iLast = wordlist.size() - 2;
if (stardict_strcmp(str, get_key(0)) < 0) {
idx = 0;
} else if (stardict_strcmp(str, get_key(iTo)) > 0) {
idx = INVALID_INDEX;
next_idx = 0;
} else if (stardict_strcmp(str, get_key(iLast)) > 0) {
next_idx = INVALID_INDEX;
} else {
glong iThisIndex = 0;
glong iFrom = 0;
glong iFrom = 0, iTo = iLast;
gint cmpint;
while (iFrom <= iTo) {
iThisIndex = (iFrom + iTo) / 2;
@@ -821,9 +820,17 @@ bool WordListIndex::lookup(const char *str, glong &idx)
}
}
if (!bFound)
idx = iFrom; //next
else
idx = iThisIndex;
next_idx = iFrom; // next
else {
// In order to return all idxs that match the search string, walk
// linearly behind and ahead of the found index.
glong iHeadIndex = iThisIndex - 1; // do not include iThisIndex
while (iHeadIndex >= 0 && stardict_strcmp(str, get_key(iHeadIndex)) == 0)
idxs.insert(iHeadIndex--);
do // no need to double-check iThisIndex -- we know it's a match already
idxs.insert(iThisIndex++);
while (iThisIndex <= iLast && stardict_strcmp(str, get_key(iThisIndex)) == 0);
}
}
return bFound;
}
@@ -833,46 +840,87 @@ bool SynFile::load(const std::string &url, gulong wc)
{
struct stat stat_buf;
if (!stat(url.c_str(), &stat_buf)) {
MapFile syn;
if (!syn.open(url.c_str(), stat_buf.st_size))
if (!synfile.open(url.c_str(), stat_buf.st_size))
return false;
const gchar *current = syn.begin();
synlist.resize(wc + 1);
gchar *p1 = synfile.begin();
for (unsigned long i = 0; i < wc; i++) {
// each entry in a syn-file is:
// - 0-terminated string
// 4-byte index into .dict file in network byte order
glib::CharStr lower_string{ g_utf8_casefold(current, -1) };
std::string synonym{ get_impl(lower_string) };
current += synonym.length() + 1;
const guint32 idx = g_ntohl(get_uint32(current));
current += sizeof(idx);
synonyms[synonym] = idx;
synlist[i] = p1;
p1 += strlen(p1) + 1 + 4;
}
synlist[wc] = p1;
return true;
} else {
return false;
}
}
bool SynFile::lookup(const char *str, glong &idx)
bool SynFile::lookup(const char *str, std::set<glong> &idxs, glong &next_idx)
{
glib::CharStr lower_string{ g_utf8_casefold(str, -1) };
auto it = synonyms.find(get_impl(lower_string));
if (it != synonyms.end()) {
idx = it->second;
return true;
}
bool bFound = false;
glong iLast = synlist.size() - 2;
if (iLast < 0)
return false;
if (stardict_strcmp(str, get_key(0)) < 0) {
next_idx = 0;
} else if (stardict_strcmp(str, get_key(iLast)) > 0) {
next_idx = INVALID_INDEX;
} else {
glong iThisIndex = 0;
glong iFrom = 0, iTo = iLast;
gint cmpint;
while (iFrom <= iTo) {
iThisIndex = (iFrom + iTo) / 2;
cmpint = stardict_strcmp(str, get_key(iThisIndex));
if (cmpint > 0)
iFrom = iThisIndex + 1;
else if (cmpint < 0)
iTo = iThisIndex - 1;
else {
bFound = true;
break;
}
}
if (!bFound)
next_idx = iFrom; // next
else {
// In order to return all idxs that match the search string, walk
// linearly behind and ahead of the found index.
glong iHeadIndex = iThisIndex - 1; // do not include iThisIndex
while (iHeadIndex >= 0 && stardict_strcmp(str, get_key(iHeadIndex)) == 0) {
const gchar *key = get_key(iHeadIndex--);
idxs.insert(g_ntohl(get_uint32(key + strlen(key) + 1)));
}
do {
// no need to double-check iThisIndex -- we know it's a match already
const gchar *key = get_key(iThisIndex++);
idxs.insert(g_ntohl(get_uint32(key + strlen(key) + 1)));
} while (iThisIndex <= iLast && stardict_strcmp(str, get_key(iThisIndex)) == 0);
}
}
return bFound;
}
bool Dict::Lookup(const char *str, glong &idx)
bool Dict::Lookup(const char *str, std::set<glong> &idxs, glong &next_idx)
{
return syn_file->lookup(str, idx) || idx_file->lookup(str, idx);
bool found = false;
found |= syn_file->lookup(str, idxs, next_idx);
found |= idx_file->lookup(str, idxs, next_idx);
return found;
}
bool Dict::load(const std::string &ifofilename, bool verbose)
{
gulong idxfilesize;
off_t idxfilesize;
if (!load_ifofile(ifofilename, idxfilesize))
return false;
@@ -916,7 +964,7 @@ bool Dict::load(const std::string &ifofilename, bool verbose)
return true;
}
bool Dict::load_ifofile(const std::string &ifofilename, gulong &idxfilesize)
bool Dict::load_ifofile(const std::string &ifofilename, off_t &idxfilesize)
{
DictInfo dict_info;
if (!dict_info.load_from_ifo_file(ifofilename, false))
@@ -975,120 +1023,8 @@ void Libs::load(const std::list<std::string> &dicts_dirs,
});
}
const gchar *Libs::poGetCurrentWord(glong *iCurrent)
bool Libs::LookupSimilarWord(const gchar *sWord, std::set<glong> &iWordIndices, int iLib)
{
const gchar *poCurrentWord = nullptr;
const gchar *word;
for (std::vector<Dict *>::size_type iLib = 0; iLib < oLib.size(); iLib++) {
if (iCurrent[iLib] == INVALID_INDEX)
continue;
if (iCurrent[iLib] >= narticles(iLib) || iCurrent[iLib] < 0)
continue;
if (poCurrentWord == nullptr) {
poCurrentWord = poGetWord(iCurrent[iLib], iLib);
} else {
word = poGetWord(iCurrent[iLib], iLib);
if (stardict_strcmp(poCurrentWord, word) > 0)
poCurrentWord = word;
}
}
return poCurrentWord;
}
const gchar *Libs::poGetNextWord(const gchar *sWord, glong *iCurrent)
{
// the input can be:
// (word,iCurrent),read word,write iNext to iCurrent,and return next word. used by TopWin::NextCallback();
// (nullptr,iCurrent),read iCurrent,write iNext to iCurrent,and return next word. used by AppCore::ListWords();
const gchar *poCurrentWord = nullptr;
size_t iCurrentLib = 0;
const gchar *word;
for (size_t iLib = 0; iLib < oLib.size(); ++iLib) {
if (sWord)
oLib[iLib]->Lookup(sWord, iCurrent[iLib]);
if (iCurrent[iLib] == INVALID_INDEX)
continue;
if (iCurrent[iLib] >= narticles(iLib) || iCurrent[iLib] < 0)
continue;
if (poCurrentWord == nullptr) {
poCurrentWord = poGetWord(iCurrent[iLib], iLib);
iCurrentLib = iLib;
} else {
word = poGetWord(iCurrent[iLib], iLib);
if (stardict_strcmp(poCurrentWord, word) > 0) {
poCurrentWord = word;
iCurrentLib = iLib;
}
}
}
if (poCurrentWord) {
iCurrent[iCurrentLib]++;
for (std::vector<Dict *>::size_type iLib = 0; iLib < oLib.size(); iLib++) {
if (iLib == iCurrentLib)
continue;
if (iCurrent[iLib] == INVALID_INDEX)
continue;
if (iCurrent[iLib] >= narticles(iLib) || iCurrent[iLib] < 0)
continue;
if (strcmp(poCurrentWord, poGetWord(iCurrent[iLib], iLib)) == 0)
iCurrent[iLib]++;
}
poCurrentWord = poGetCurrentWord(iCurrent);
}
return poCurrentWord;
}
const gchar *
Libs::poGetPreWord(glong *iCurrent)
{
// used by TopWin::PreviousCallback(); the iCurrent is cached by AppCore::TopWinWordChange();
const gchar *poCurrentWord = nullptr;
std::vector<Dict *>::size_type iCurrentLib = 0;
const gchar *word;
for (std::vector<Dict *>::size_type iLib = 0; iLib < oLib.size(); iLib++) {
if (iCurrent[iLib] == INVALID_INDEX)
iCurrent[iLib] = narticles(iLib);
else {
if (iCurrent[iLib] > narticles(iLib) || iCurrent[iLib] <= 0)
continue;
}
if (poCurrentWord == nullptr) {
poCurrentWord = poGetWord(iCurrent[iLib] - 1, iLib);
iCurrentLib = iLib;
} else {
word = poGetWord(iCurrent[iLib] - 1, iLib);
if (stardict_strcmp(poCurrentWord, word) < 0) {
poCurrentWord = word;
iCurrentLib = iLib;
}
}
}
if (poCurrentWord) {
iCurrent[iCurrentLib]--;
for (std::vector<Dict *>::size_type iLib = 0; iLib < oLib.size(); iLib++) {
if (iLib == iCurrentLib)
continue;
if (iCurrent[iLib] > narticles(iLib) || iCurrent[iLib] <= 0)
continue;
if (strcmp(poCurrentWord, poGetWord(iCurrent[iLib] - 1, iLib)) == 0) {
iCurrent[iLib]--;
} else {
if (iCurrent[iLib] == narticles(iLib))
iCurrent[iLib] = INVALID_INDEX;
}
}
}
return poCurrentWord;
}
bool Libs::LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib)
{
glong iIndex;
bool bFound = false;
gchar *casestr;
@@ -1096,7 +1032,7 @@ bool Libs::LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib)
// to lower case.
casestr = g_utf8_strdown(sWord, -1);
if (strcmp(casestr, sWord)) {
if (oLib[iLib]->Lookup(casestr, iIndex))
if (oLib[iLib]->Lookup(casestr, iWordIndices))
bFound = true;
}
g_free(casestr);
@@ -1104,7 +1040,7 @@ bool Libs::LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib)
if (!bFound) {
casestr = g_utf8_strup(sWord, -1);
if (strcmp(casestr, sWord)) {
if (oLib[iLib]->Lookup(casestr, iIndex))
if (oLib[iLib]->Lookup(casestr, iWordIndices))
bFound = true;
}
g_free(casestr);
@@ -1118,7 +1054,7 @@ bool Libs::LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib)
g_free(firstchar);
g_free(nextchar);
if (strcmp(casestr, sWord)) {
if (oLib[iLib]->Lookup(casestr, iIndex))
if (oLib[iLib]->Lookup(casestr, iWordIndices))
bFound = true;
}
g_free(casestr);
@@ -1138,12 +1074,12 @@ bool Libs::LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib)
if (isupcase || sWord[iWordLen - 1] == 's' || !strncmp(&sWord[iWordLen - 2], "ed", 2)) {
strcpy(sNewWord, sWord);
sNewWord[iWordLen - 1] = '\0'; // cut "s" or "d"
if (oLib[iLib]->Lookup(sNewWord, iIndex))
if (oLib[iLib]->Lookup(sNewWord, iWordIndices))
bFound = true;
else if (isupcase || g_ascii_isupper(sWord[0])) {
casestr = g_ascii_strdown(sNewWord, -1);
if (strcmp(casestr, sNewWord)) {
if (oLib[iLib]->Lookup(casestr, iIndex))
if (oLib[iLib]->Lookup(casestr, iWordIndices))
bFound = true;
}
g_free(casestr);
@@ -1161,13 +1097,13 @@ bool Libs::LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib)
&& !bIsVowel(sNewWord[iWordLen - 4]) && bIsVowel(sNewWord[iWordLen - 5])) { // doubled
sNewWord[iWordLen - 3] = '\0';
if (oLib[iLib]->Lookup(sNewWord, iIndex))
if (oLib[iLib]->Lookup(sNewWord, iWordIndices))
bFound = true;
else {
if (isupcase || g_ascii_isupper(sWord[0])) {
casestr = g_ascii_strdown(sNewWord, -1);
if (strcmp(casestr, sNewWord)) {
if (oLib[iLib]->Lookup(casestr, iIndex))
if (oLib[iLib]->Lookup(casestr, iWordIndices))
bFound = true;
}
g_free(casestr);
@@ -1177,12 +1113,12 @@ bool Libs::LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib)
}
}
if (!bFound) {
if (oLib[iLib]->Lookup(sNewWord, iIndex))
if (oLib[iLib]->Lookup(sNewWord, iWordIndices))
bFound = true;
else if (isupcase || g_ascii_isupper(sWord[0])) {
casestr = g_ascii_strdown(sNewWord, -1);
if (strcmp(casestr, sNewWord)) {
if (oLib[iLib]->Lookup(casestr, iIndex))
if (oLib[iLib]->Lookup(casestr, iWordIndices))
bFound = true;
}
g_free(casestr);
@@ -1200,13 +1136,13 @@ bool Libs::LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib)
if (iWordLen > 6 && (sNewWord[iWordLen - 4] == sNewWord[iWordLen - 5])
&& !bIsVowel(sNewWord[iWordLen - 5]) && bIsVowel(sNewWord[iWordLen - 6])) { // doubled
sNewWord[iWordLen - 4] = '\0';
if (oLib[iLib]->Lookup(sNewWord, iIndex))
if (oLib[iLib]->Lookup(sNewWord, iWordIndices))
bFound = true;
else {
if (isupcase || g_ascii_isupper(sWord[0])) {
casestr = g_ascii_strdown(sNewWord, -1);
if (strcmp(casestr, sNewWord)) {
if (oLib[iLib]->Lookup(casestr, iIndex))
if (oLib[iLib]->Lookup(casestr, iWordIndices))
bFound = true;
}
g_free(casestr);
@@ -1216,12 +1152,12 @@ bool Libs::LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib)
}
}
if (!bFound) {
if (oLib[iLib]->Lookup(sNewWord, iIndex))
if (oLib[iLib]->Lookup(sNewWord, iWordIndices))
bFound = true;
else if (isupcase || g_ascii_isupper(sWord[0])) {
casestr = g_ascii_strdown(sNewWord, -1);
if (strcmp(casestr, sNewWord)) {
if (oLib[iLib]->Lookup(casestr, iIndex))
if (oLib[iLib]->Lookup(casestr, iWordIndices))
bFound = true;
}
g_free(casestr);
@@ -1232,12 +1168,12 @@ bool Libs::LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib)
strcat(sNewWord, "E"); // add a char "E"
else
strcat(sNewWord, "e"); // add a char "e"
if (oLib[iLib]->Lookup(sNewWord, iIndex))
if (oLib[iLib]->Lookup(sNewWord, iWordIndices))
bFound = true;
else if (isupcase || g_ascii_isupper(sWord[0])) {
casestr = g_ascii_strdown(sNewWord, -1);
if (strcmp(casestr, sNewWord)) {
if (oLib[iLib]->Lookup(casestr, iIndex))
if (oLib[iLib]->Lookup(casestr, iWordIndices))
bFound = true;
}
g_free(casestr);
@@ -1252,12 +1188,12 @@ bool Libs::LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib)
if (isupcase || (!strncmp(&sWord[iWordLen - 2], "es", 2) && (sWord[iWordLen - 3] == 's' || sWord[iWordLen - 3] == 'x' || sWord[iWordLen - 3] == 'o' || (iWordLen > 4 && sWord[iWordLen - 3] == 'h' && (sWord[iWordLen - 4] == 'c' || sWord[iWordLen - 4] == 's'))))) {
strcpy(sNewWord, sWord);
sNewWord[iWordLen - 2] = '\0';
if (oLib[iLib]->Lookup(sNewWord, iIndex))
if (oLib[iLib]->Lookup(sNewWord, iWordIndices))
bFound = true;
else if (isupcase || g_ascii_isupper(sWord[0])) {
casestr = g_ascii_strdown(sNewWord, -1);
if (strcmp(casestr, sNewWord)) {
if (oLib[iLib]->Lookup(casestr, iIndex))
if (oLib[iLib]->Lookup(casestr, iWordIndices))
bFound = true;
}
g_free(casestr);
@@ -1274,13 +1210,13 @@ bool Libs::LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib)
if (iWordLen > 5 && (sNewWord[iWordLen - 3] == sNewWord[iWordLen - 4])
&& !bIsVowel(sNewWord[iWordLen - 4]) && bIsVowel(sNewWord[iWordLen - 5])) { // doubled
sNewWord[iWordLen - 3] = '\0';
if (oLib[iLib]->Lookup(sNewWord, iIndex))
if (oLib[iLib]->Lookup(sNewWord, iWordIndices))
bFound = true;
else {
if (isupcase || g_ascii_isupper(sWord[0])) {
casestr = g_ascii_strdown(sNewWord, -1);
if (strcmp(casestr, sNewWord)) {
if (oLib[iLib]->Lookup(casestr, iIndex))
if (oLib[iLib]->Lookup(casestr, iWordIndices))
bFound = true;
}
g_free(casestr);
@@ -1290,12 +1226,12 @@ bool Libs::LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib)
}
}
if (!bFound) {
if (oLib[iLib]->Lookup(sNewWord, iIndex))
if (oLib[iLib]->Lookup(sNewWord, iWordIndices))
bFound = true;
else if (isupcase || g_ascii_isupper(sWord[0])) {
casestr = g_ascii_strdown(sNewWord, -1);
if (strcmp(casestr, sNewWord)) {
if (oLib[iLib]->Lookup(casestr, iIndex))
if (oLib[iLib]->Lookup(casestr, iWordIndices))
bFound = true;
}
g_free(casestr);
@@ -1314,12 +1250,12 @@ bool Libs::LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib)
strcat(sNewWord, "Y"); // add a char "Y"
else
strcat(sNewWord, "y"); // add a char "y"
if (oLib[iLib]->Lookup(sNewWord, iIndex))
if (oLib[iLib]->Lookup(sNewWord, iWordIndices))
bFound = true;
else if (isupcase || g_ascii_isupper(sWord[0])) {
casestr = g_ascii_strdown(sNewWord, -1);
if (strcmp(casestr, sNewWord)) {
if (oLib[iLib]->Lookup(casestr, iIndex))
if (oLib[iLib]->Lookup(casestr, iWordIndices))
bFound = true;
}
g_free(casestr);
@@ -1337,12 +1273,12 @@ bool Libs::LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib)
strcat(sNewWord, "Y"); // add a char "Y"
else
strcat(sNewWord, "y"); // add a char "y"
if (oLib[iLib]->Lookup(sNewWord, iIndex))
if (oLib[iLib]->Lookup(sNewWord, iWordIndices))
bFound = true;
else if (isupcase || g_ascii_isupper(sWord[0])) {
casestr = g_ascii_strdown(sNewWord, -1);
if (strcmp(casestr, sNewWord)) {
if (oLib[iLib]->Lookup(casestr, iIndex))
if (oLib[iLib]->Lookup(casestr, iWordIndices))
bFound = true;
}
g_free(casestr);
@@ -1356,12 +1292,12 @@ bool Libs::LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib)
if (isupcase || (!strncmp(&sWord[iWordLen - 2], "er", 2))) {
strcpy(sNewWord, sWord);
sNewWord[iWordLen - 2] = '\0';
if (oLib[iLib]->Lookup(sNewWord, iIndex))
if (oLib[iLib]->Lookup(sNewWord, iWordIndices))
bFound = true;
else if (isupcase || g_ascii_isupper(sWord[0])) {
casestr = g_ascii_strdown(sNewWord, -1);
if (strcmp(casestr, sNewWord)) {
if (oLib[iLib]->Lookup(casestr, iIndex))
if (oLib[iLib]->Lookup(casestr, iWordIndices))
bFound = true;
}
g_free(casestr);
@@ -1375,12 +1311,12 @@ bool Libs::LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib)
if (isupcase || (!strncmp(&sWord[iWordLen - 3], "est", 3))) {
strcpy(sNewWord, sWord);
sNewWord[iWordLen - 3] = '\0';
if (oLib[iLib]->Lookup(sNewWord, iIndex))
if (oLib[iLib]->Lookup(sNewWord, iWordIndices))
bFound = true;
else if (isupcase || g_ascii_isupper(sWord[0])) {
casestr = g_ascii_strdown(sNewWord, -1);
if (strcmp(casestr, sNewWord)) {
if (oLib[iLib]->Lookup(casestr, iIndex))
if (oLib[iLib]->Lookup(casestr, iWordIndices))
bFound = true;
}
g_free(casestr);
@@ -1390,9 +1326,6 @@ bool Libs::LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib)
g_free(sNewWord);
}
if (bFound)
iWordIndex = iIndex;
#if 0
else {
//don't change iWordIndex here.
@@ -1403,11 +1336,11 @@ bool Libs::LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib)
return bFound;
}
bool Libs::SimpleLookupWord(const gchar *sWord, glong &iWordIndex, int iLib)
bool Libs::SimpleLookupWord(const gchar *sWord, std::set<glong> &iWordIndices, int iLib)
{
bool bFound = oLib[iLib]->Lookup(sWord, iWordIndex);
bool bFound = oLib[iLib]->Lookup(sWord, iWordIndices);
if (!bFound && fuzzy_)
bFound = LookupSimilarWord(sWord, iWordIndex, iLib);
bFound = LookupSimilarWord(sWord, iWordIndices, iLib);
return bFound;
}

View File

@@ -1,11 +1,10 @@
#pragma once
#include <cstdio>
#include <cstring>
#include <functional>
#include <list>
#include <map>
#include <memory>
#include <set>
#include <string>
#include <vector>
@@ -78,8 +77,8 @@ struct DictInfo {
std::string website;
std::string date;
std::string description;
guint32 index_file_size;
guint32 syn_file_size;
off_t index_file_size;
off_t syn_file_size;
std::string sametypesequence;
bool load_from_ifo_file(const std::string &ifofilename, bool istreedict);
@@ -92,21 +91,31 @@ public:
guint32 wordentry_size;
virtual ~IIndexFile() {}
virtual bool load(const std::string &url, gulong wc, gulong fsize, bool verbose) = 0;
virtual bool load(const std::string &url, gulong wc, off_t fsize, bool verbose) = 0;
virtual const gchar *get_key(glong idx) = 0;
virtual void get_data(glong idx) = 0;
virtual const gchar *get_key_and_data(glong idx) = 0;
virtual bool lookup(const char *str, glong &idx) = 0;
virtual bool lookup(const char *str, std::set<glong> &idxs, glong &next_idx) = 0;
virtual bool lookup(const char *str, std::set<glong> &idxs)
{
glong unused_next_idx;
return lookup(str, idxs, unused_next_idx);
};
};
class SynFile
{
public:
SynFile() {}
~SynFile() {}
bool load(const std::string &url, gulong wc);
bool lookup(const char *str, glong &idx);
bool lookup(const char *str, std::set<glong> &idxs, glong &next_idx);
bool lookup(const char *str, std::set<glong> &idxs);
const gchar *get_key(glong idx) { return synlist[idx]; }
private:
std::map<std::string, gulong> synonyms;
MapFile synfile;
std::vector<gchar *> synlist;
};
class Dict : public DictBase
@@ -133,7 +142,12 @@ public:
*offset = idx_file->wordentry_offset;
*size = idx_file->wordentry_size;
}
bool Lookup(const char *str, glong &idx);
bool Lookup(const char *str, std::set<glong> &idxs, glong &next_idx);
bool Lookup(const char *str, std::set<glong> &idxs)
{
glong unused_next_idx;
return Lookup(str, idxs, unused_next_idx);
}
bool LookupWithRule(GPatternSpec *pspec, glong *aIndex, int iBuffLen);
@@ -146,7 +160,7 @@ private:
std::unique_ptr<IIndexFile> idx_file;
std::unique_ptr<SynFile> syn_file;
bool load_ifofile(const std::string &ifofilename, gulong &idxfilesize);
bool load_ifofile(const std::string &ifofilename, off_t &idxfilesize);
};
class Libs
@@ -181,15 +195,12 @@ public:
return nullptr;
return oLib[iLib]->get_data(iIndex);
}
const gchar *poGetCurrentWord(glong *iCurrent);
const gchar *poGetNextWord(const gchar *word, glong *iCurrent);
const gchar *poGetPreWord(glong *iCurrent);
bool LookupWord(const gchar *sWord, glong &iWordIndex, int iLib)
bool LookupWord(const gchar *sWord, std::set<glong> &iWordIndices, int iLib)
{
return oLib[iLib]->Lookup(sWord, iWordIndex);
return oLib[iLib]->Lookup(sWord, iWordIndices);
}
bool LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib);
bool SimpleLookupWord(const gchar *sWord, glong &iWordIndex, int iLib);
bool LookupSimilarWord(const gchar *sWord, std::set<glong> &iWordIndices, int iLib);
bool SimpleLookupWord(const gchar *sWord, std::set<glong> &iWordIndices, int iLib);
bool LookupWithFuzzy(const gchar *sWord, gchar *reslist[], gint reslist_size);
gint LookupWithRule(const gchar *sWord, gchar *reslist[]);

View File

@@ -0,0 +1,9 @@
StarDict's dict ifo file
version=3.0.0
bookname=Russian-English Dictionary (ru-en)
wordcount=415144
idxfilesize=12344255
sametypesequence=h
synwordcount=1277580
author=Vuizur
description=

Binary file not shown.

Binary file not shown.

View File

@@ -0,0 +1,7 @@
StarDict's dict ifo file
version=3.0.0
bookname=Test multiple results
wordcount=246
idxfilesize=5977
synwordcount=124
description=

Binary file not shown.

View File

@@ -18,8 +18,15 @@ test_json() {
fi
}
test_json '[{"name": "Test synonyms", "wordcount": "2"},{"name": "Sample 1 test dictionary", "wordcount": "1"},{"name": "test_dict", "wordcount": "1"}]' -x -j -l -n --data-dir "$TEST_DIR"
test_json '[{"name": "Russian-English Dictionary (ru-en)", "wordcount": "415144"},
{"name": "Test synonyms", "wordcount": "2"},
{"name": "Test multiple results", "wordcount": "246"},
{"name": "Sample 1 test dictionary", "wordcount": "1"},
{"name": "test_dict", "wordcount": "1"}]' -x -j -l -n --data-dir "$TEST_DIR"
test_json '[{"dict": "Test synonyms","word":"test","definition":"\u000aresult of test"}]' -x -j -n --data-dir "$TEST_DIR" foo
test_json '[]' -x -j -n --data-dir "$TEST_DIR" foobarbaaz
# Test multiple searches, with the first failing.
test_json '[][{"dict": "Test synonyms","word":"test","definition":"\u000aresult of test"}]' -x -j -n --data-dir "$TEST_DIR" foobarbaaz foo
exit 0

tests/t_multiple_results Executable file
View File

@@ -0,0 +1,67 @@
#!/bin/sh
set -e
SDCV="$1"
TEST_DIR="$2"
unset SDCV_PAGER
unset STARDICT_DATA_DIR
test_json() {
word="$1"
jq_cmp="$2"
result="$("$SDCV" --data-dir "$TEST_DIR" -exjn "$word" | sed 's|\\n|\\u000a|g')"
cmp_result="$(echo "$result" | jq "$jq_cmp")"
if [ "$cmp_result" != "true" ]; then
echo "expected '$jq_cmp' to return true, but $result didn't"
exit 1
fi
}
# Basic two-result search for the same headword.
test_json bark \
'. == [
{"dict":"Test multiple results","word":"bark","definition":"\u000aThe harsh sound made by a dog."},
{"dict":"Test multiple results","word":"bark","definition":"\u000aThe tough outer covering of trees and other woody plants."}
]'
# Multi-result search where one word exists as both a synonym and a separate
# headword. This ensures that if there is a matching synonym we don't skip the
# regular search.
test_json cat \
'. == [
{"dict":"Test multiple results","word":"cat","definition":"\u000aA cute animal which (rarely) barks."},
{"dict":"Test multiple results","word":"lion","definition":"\u000aA larger cat which might bite your head off."},
{"dict":"Test multiple results","word":"panther","definition":"\u000aI know very little about panthers, sorry."}
]'
# Many-result search for a word that matches 120 distinct headwords.
test_json many_headwords 'length == 120'
test_json many_headwords 'all(.word == "many_headwords")'
test_json many_headwords \
'to_entries | map(.value.definition == "\u000aDefinition for [many_headwords] entry #\(.key+1) (same headword).") | all'
# Many-result search for 120 words that have the same synonym.
test_json many_synonyms 'length == 120'
test_json many_synonyms \
'to_entries | map(.value.word == "many_synonyms-\(.key+101)") | all'
test_json many_synonyms \
'to_entries | map(.value.definition == "\u000aDefinition for [many_synonyms-\(.key+101)] (same synonym).") | all'
# Ensure that we don't return more than one result even if a word can be
# resolved in more than one way.
#
# Most well-formed dictionaries don't have entries like this (it basically
# requires you to have a dictionary where there is a synonym that is identical
# to a word's headword or multiple identical synonym entries).
#
# This entry was created by creating extra synonyms with different names then
# modifying the .syn file manually.
test_json many_resolution_paths \
'. == [
{"dict":"Test multiple results","word":"many_resolution_paths",
"definition":"\u000aDefinition for [many_resolution_paths] headword (same word, multiple synonym entries)."}
]'
exit 0

tests/t_newlines_in_ifo Executable file
View File

@@ -0,0 +1,18 @@
#!/bin/sh
set -e
PATH_TO_SDCV="$1"
TEST_DIR="$2"
unset SDCV_PAGER
unset STARDICT_DATA_DIR
RES=$("$PATH_TO_SDCV" -n -x --data-dir="$TEST_DIR/not-unix-newlines-ifo" -l | tail -n 1)
if [ "$RES" = "Russian-English Dictionary (ru-en) 415144" ]; then
exit 0
else
echo "test failed, unexpected result: $RES" >&2
exit 1
fi