What Does ## in a #Define Mean

Hash symbol after define macro?

This is the stringize operator: it produces a string literal from the macro parameter, e.g. "n". Two macro layers are required to allow an extra expansion of the macro parameter. Assuming the usual two-level definition:

#define STRING(s) #s
#define MAKESTRING(s) STRING(s)

// prints __LINE__ (not expanded)
std::cout << STRING(__LINE__) << std::endl;
// prints 42 (the line number, after expansion)
std::cout << MAKESTRING(__LINE__) << std::endl;

What does hash do in python?

A hash is a fixed-size integer that identifies a particular value. Equal values must produce the same hash, so for the same value you will get the same hash even if it's not the same object.

>>> hash("Look at me!")
4343814758193556824
>>> f = "Look at me!"
>>> hash(f)
4343814758193556824

Hash values need to be created in such a way that the resulting values are evenly distributed to reduce the number of hash collisions you get. Hash collisions are when two different values have the same hash. Therefore, relatively small changes often result in very different hashes.

>>> hash("Look at me!!")
6941904779894686356

These numbers are very useful, as they enable quick look-up of values in a large collection of values. Two examples of their use are Python's set and dict. In a list, if you want to check if a value is in the list, with if x in values:, Python needs to go through the whole list and compare x with each value in the list values. This can take a long time for a long list. In a set, Python keeps track of each hash, and when you type if x in values:, Python will get the hash-value for x, look that up in an internal structure and then only compare x with the values that have the same hash as x.

The same methodology is used for dictionary lookup. This makes lookup in set and dict very fast, while lookup in list is slow. It also means you can have non-hashable objects in a list, but not in a set or as keys in a dict. The typical example of non-hashable objects is any object that is mutable, meaning that you can change its value. If you have a mutable object it should not be hashable, as its hash then will change over its life-time, which would cause a lot of confusion, as an object could end up under the wrong hash value in a dictionary.
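Both points can be seen in a short sketch: hash-based membership in a set, a hashable (immutable) dict key, and the TypeError raised when a mutable list is used as one:

```python
values = {"a", "b", "c"}        # a set: membership tests use hashes
print("a" in values)             # hash-based lookup, no full scan -> True

d = {(1, 2): "point"}            # tuples are immutable, hence hashable
print(d[(1, 2)])                 # -> point

try:
    bad = {[1, 2]: "oops"}       # lists are mutable, hence unhashable
except TypeError as e:
    print(e)                     # -> unhashable type: 'list'
```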

Note that the hash of a value only needs to be the same for one run of Python. Starting with Python 3.3, string hashes will in fact change for every new run of Python:

$ /opt/python33/bin/python3
Python 3.3.2 (default, Jun 17 2013, 17:49:21)
[GCC 4.6.3] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> hash("foo")
1849024199686380661
>>>
$ /opt/python33/bin/python3
Python 3.3.2 (default, Jun 17 2013, 17:49:21)
[GCC 4.6.3] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> hash("foo")
-7416743951976404299

This is to make it harder to guess what hash value a certain string will have, which is an important security feature for web applications, etc.

Hash values should therefore not be stored permanently. If you need hashes that stay valid permanently, take a look at the more "serious" kinds of hashes, cryptographic hash functions, which can be used to make verifiable checksums of files, etc.
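For instance, a sketch using the standard hashlib module (SHA-256 chosen arbitrarily): unlike hash(), the digest is reproducible across runs and machines:

```python
import hashlib

# hash("Look at me!") changes between runs; a cryptographic
# digest of the same bytes does not.
digest = hashlib.sha256(b"Look at me!").hexdigest()
print(digest)    # 64 hex characters, stable across runs
```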

What does ## (double hash) do in a preprocessor directive?

## is the preprocessor operator for concatenation.

So, given a definition like (inferred from the expansion shown below)

#define DEFINE_STAT(type) \
struct FThreadSafeStaticStat<FStat_##type> StatPtr_##type;

anywhere you use

DEFINE_STAT(foo)

in the code, it gets replaced with

struct FThreadSafeStaticStat<FStat_foo> StatPtr_foo;

before your code is compiled.

Here is another example from a blog post of mine to explain this further.

#include <stdio.h>

#define decode(s,t,u,m,p,e,d) m ## s ## u ## t
#define begin decode(a,n,i,m,a,t,e)

int begin()
{
    printf("Stumped?\n");
}

This program would compile and execute successfully, and produce the following output:

Stumped?

When the preprocessor is invoked on this code,

  • begin is replaced with decode(a,n,i,m,a,t,e)
  • decode(a,n,i,m,a,t,e) is replaced with m ## a ## i ## n
  • m ## a ## i ## n is replaced with main

Thus effectively, begin() is replaced with main().

How to hash a class or function definition?

All you’re looking for is a hash procedure that includes all the salient details of the class’s definition. (Base classes can be included by including their definitions recursively.) To minimize false matches, the basic idea is to apply a wide (cryptographic) hash to a serialization of your class. So start with pickle: it supports more types than hash and, when it uses identity, it uses a reproducible identity based on name. This makes it a good candidate for the base case of a recursive strategy: deal with the functions and classes whose contents are important and let it handle any ancillary objects referenced.

So define a serialization by cases. Call an object special if it falls under any case below but the last.

  • For a tuple deemed to contain special objects:

    1. The character t
    2. The serialization of its len
    3. The serialization of each element, in order
  • For a dict deemed to contain special objects:

    1. The character d
    2. The serialization of its len
    3. The serialization of each name and value, in sorted order
  • For a class whose definition is salient:

    1. The character C
    2. The serialization of its __bases__
    3. The serialization of its vars
  • For a function whose definition is salient:

    1. The character f
    2. The serialization of its __defaults__
    3. The serialization of its __kwdefaults__ (in Python 3)
    4. The serialization of its __closure__ (but with cell values instead of the cells themselves)
    5. The serialization of its vars
    6. The serialization of its __code__
  • For a code object (since pickle doesn’t support them at all):

    1. The character c
    2. The serializations of its co_argcount, co_nlocals, co_flags, co_code, co_consts, co_names, co_freevars, and co_cellvars, in that order; none of these are ever special
  • For a static or class method object:

    1. The character s or m
    2. The serialization of its __func__
  • For a property:

    1. The character p
    2. The serializations of its fget, fset, and fdel, in that order
  • For any other object: pickle.dumps(x,-1)

(You never actually store all this: just create a hashlib object of your choice in the top-level function, and in the recursive part update it with each piece of the serialization in turn.)

The type tags are to avoid collisions and in particular to be prefix-free. Binary pickles are already prefix-free. You can base the decision about a container on a deterministic analysis of its contents (even if heuristic) or on context, so long as you’re consistent.

As always, there is something of an art to balancing false positives against false negatives: for a function, you could include __globals__ (with pruning of objects already serialized to avoid large if not infinite serializations) or just any __name__ found therein. Omitting co_varnames ignores renaming local variables, which is good unless introspection is important; similarly for co_filename and co_name.

You may need to support more types: look for static attributes and default arguments that don’t pickle correctly (because they contain references to special types) or at all. Note of course that some types (like file objects) are unpicklable because it’s difficult or impossible to serialize them (although unlike pickle you can handle lambdas just like any other function once you’ve done code objects). At some risk of false matches, you can choose to serialize just the type of such objects (as always, prefixed with a character ? to distinguish from actually having the type in that position).
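A minimal sketch of the recursive scheme above, covering only the tuple, function, and code-object cases (closures, vars, and the other cases are omitted for brevity; all names here are illustrative, not from any library):

```python
import hashlib
import pickle
import types

def _update(h, obj):
    """Feed the prefix-free serialization of obj into hash object h."""
    if isinstance(obj, tuple):
        h.update(b"t")                      # type tag keeps the encoding prefix-free
        _update(h, len(obj))
        for item in obj:
            _update(h, item)
    elif isinstance(obj, types.FunctionType):
        h.update(b"f")
        _update(h, obj.__defaults__)
        _update(h, obj.__kwdefaults__)
        _update(h, obj.__code__)
    elif isinstance(obj, types.CodeType):   # pickle cannot handle code objects
        h.update(b"c")
        for part in (obj.co_argcount, obj.co_nlocals, obj.co_flags,
                     obj.co_code, obj.co_consts, obj.co_names,
                     obj.co_freevars, obj.co_cellvars):
            _update(h, part)
    else:
        h.update(pickle.dumps(obj, -1))     # base case: anything pickle supports

def definition_hash(func):
    h = hashlib.sha256()
    _update(h, func)
    return h.hexdigest()
```

Because co_name is deliberately omitted, two functions with identical bodies hash the same even when their names differ, as discussed above.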

What is the purpose of a single pound/hash sign (#) on its own line in the C/C++ preprocessor?

A # on its own on a line has no effect at all. I assume it's being used for aesthetic value.

The C standard says:

6.10.7 Null directive

Semantics

A preprocessing directive of the form

# new-line

has no effect.

The C++ standard says the same thing:

16.7 Null directive [cpp.null]

A preprocessing directive of the form

# new-line

has no effect.

What does hashable mean in Python?

From the Python glossary:

An object is hashable if it has a hash value which never changes during its lifetime (it needs a __hash__() method), and can be compared to other objects (it needs an __eq__() or __cmp__() method). Hashable objects which compare equal must have the same hash value.

Hashability makes an object usable as a dictionary key and a set member, because these data structures use the hash value internally.

All of Python’s immutable built-in objects are hashable, while no mutable containers (such as lists or dictionaries) are. Objects which are instances of user-defined classes are hashable by default; they all compare unequal, and their hash value is their id().
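A short example of that last paragraph: instances of a plain user-defined class are hashable, compare unequal by default, and can therefore coexist in a set:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)
q = Point(1, 2)
print(p == q)              # False: default __eq__ compares identity
print(hash(p) == hash(p))  # True: the hash is stable over the object's lifetime
print(len({p, q}))         # 2: the instances compare unequal, so both are kept
```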

Chef/Ruby: Reference to Definition in Hash

Your definition takes an argument named :config_name

define :foo_definition, :config_name => nil do

And you call it without any argument in your def_map:

 'foo_name' => foo_definition,

So when the definition is called there (in your map), config_name is nil, and this line

print("foo_definition: #{node[:configs][config_name][:def_name]}")

is called as:

print("foo_definition: #{node[:configs][nil][:def_name]}")

A workaround could be to use this kind of code (untested):

for config_name in node[:configs].keys do
  def_name = node[:configs][config_name][:def_name]
  send(def_name, config_name) do
    config_name config_name
  end
end

If the definition is not known in the recipe context, this will raise an exception; you may wrap it in a begin/rescue block if you want to log the problem rather than abort the run.

Syntax complication with hash(#) in C programming

## and # -- C Macro Operators that Simplify Initializing Memory (as well as being applicable in other creative ways limited only by one's imagination)

## is concatenation, and # takes the associated #define macro parameter and substitutes the unquoted parameter text as a string literal, i.e., puts the parameter text in double quotes "" (see the example below). That is very useful in some cases for creating powerful macros that help initialize struct arrays, saving typing and making the code more concise and readable.

#define osThreadDef(name, thread, priority, instances, stacksz) \
osThreadDef_t os_thread_def_##name = \
{ #name, (thread), (priority), (instances), (stacksz) }

Based on that definition we can assume it's written to help initialize a structure whose definition looks something like this:

typedef struct osThreadDef {
    char     *threadName;
    thread_t *threadPtr;
    int       threadPrio;
    int       threadInstanceCnt;
    int       stackSize;
} osThreadDef_t;

Example of using the macro in code:

osThreadDef(foobar, &myThr, 5, 10, 100);

That would result in this after pre-processing:

osThreadDef_t os_thread_def_foobar =
    { "foobar", &myThr, 5, 10, 100 };

In reality, the macro expansion would also put ( ) around each of the last four arguments, but I didn't show that because it would just clutter the example. The parentheses are added to each parameter in the macro to keep macro expansion from creating weird problems in corner cases where strange expressions are passed to the macro.

Wrapping macro arguments in ( ) is often done for safety, because some kinds of arguments can be problematic. But simple cases like the example don't really require each argument to be wrapped in parentheses.


